| path (string, lengths 7 to 265) | concatenated_notebook (string, lengths 46 to 17M) |
---|---|
playground/disease_gene/generative_model_experiments/gen_model_benchmarking.ipynb | ###Markdown
Generative Model Benchmarking The goal here is to use the [data programming paradigm](https://arxiv.org/abs/1605.07723) to probabilistically label our training dataset for the disease-associates-gene relationship. The label functions have already been generated, and now it is time to train the generative model. This model captures important features, such as agreements and disagreements between label functions, by estimating the probability of each label function emitting a given label conditioned on the class, $P(\lambda_{i} = j \mid Y=y)$. More information can be found in this [technical report](https://arxiv.org/pdf/1810.02840.pdf) or in this [paper](https://ajratner.github.io/assets/papers/deem-metal-prototype.pdf). The testable hypothesis here is: **Incorporating multiple weak sources improves performance compared to the normal distant supervision approach, which uses a single resource for labels.** Experimental Design: This experiment compares three different models. The first model uses four databases (DisGeNET, Diseases, DOAF, and GWAS) as the distant supervision approach. The second model combines the above databases with user-defined rules (regular expressions, trigger-word identification, and sentence-context rules). The last model uses the above sources of information in conjunction with biclustering data obtained from this [paper](https://www.ncbi.nlm.nih.gov/pubmed/29490008). Dataset

| Set type | Size |
|:---|:---|
| Train | 50k |
| Dev | 500 (hand labeled) |

Set up the Environment The few blocks below set up our Python environment to perform the experiment.
###Code
%load_ext autoreload
%autoreload 2
%matplotlib inline
from itertools import product
import os
import pickle
import sys
sys.path.append(os.path.abspath('../../../modules'))
import matplotlib.pyplot as plt
import pandas as pd
from tqdm import tqdm_notebook
#Set up the environment
username = "danich1"
password = "snorkel"
dbname = "pubmeddb"
#Path subject to change for different os
database_str = "postgresql+psycopg2://{}:{}@/{}?host=/var/run/postgresql".format(username, password, dbname)
os.environ['SNORKELDB'] = database_str
from snorkel import SnorkelSession
session = SnorkelSession()
from snorkel.annotations import LabelAnnotator
from snorkel.learning.structure import DependencySelector
from snorkel.models import candidate_subclass
from metal.analysis import confusion_matrix, lf_summary
from metal.label_model import LabelModel
from metal.utils import convert_labels
from metal.contrib.visualization.analysis import (
plot_predictions_histogram,
)
from utils.label_functions.disease_gene_lf_multitask import DG_LFS
from utils.notebook_utils.dataframe_helper import load_candidate_dataframes
from utils.notebook_utils.label_matrix_helper import (
get_auc_significant_stats,
get_overlap_matrix,
get_conflict_matrix,
label_candidates
)
from utils.notebook_utils.train_model_helper import train_generative_model
from utils.notebook_utils.plot_helper import (
plot_label_matrix_heatmap,
plot_curve,
plot_generative_model_weights,
)
DiseaseGene = candidate_subclass('DiseaseGene', ['Disease', 'Gene'])
quick_load = True
###Output
_____no_output_____
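###Markdown
Before loading the data, here is a minimal sketch of the quantity the generative model estimates. The toy label matrix, gold labels, and counting estimator below are made up for illustration and are not taken from this experiment.
###Code
import numpy as np

# Hypothetical toy label matrix: rows = candidates, columns = label functions.
# Uses the "categorical" convention seen later in this notebook:
# 1 = positive, 2 = negative, 0 = abstain.
L_toy = np.array([
    [1, 0, 1],
    [1, 2, 1],
    [0, 2, 2],
    [1, 0, 1],
])
Y_toy = np.array([1, 1, 2, 1])  # hypothetical gold classes

def emission_prob(L, Y, lf, j, y):
    """Empirical estimate of P(lambda_lf = j | Y = y) by simple counting."""
    mask = (Y == y)
    return float(np.mean(L[mask, lf] == j))

# e.g., how often label function 0 emits a positive label on positive candidates
print(emission_prob(L_toy, Y_toy, lf=0, j=1, y=1))
###Output
_____no_output_____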
###Markdown
Load the data for Generative Model Experiments
###Code
spreadsheet_names = {
'train': 'data/sentence_labels_train.xlsx',
'dev': 'data/sentence_labels_dev.xlsx',
}
candidate_dfs = {
key:load_candidate_dataframes(spreadsheet_names[key])
for key in spreadsheet_names
}
for key in candidate_dfs:
print("Size of {} set: {}".format(key, candidate_dfs[key].shape[0]))
label_functions = (
list(DG_LFS["DaG"].values())
)
if quick_load:
label_matricies = pickle.load(open("data/label_matricies.pkl", "rb"))
else:
label_matricies = {
key:label_candidates(
session,
candidate_dfs[key]['candidate_id'],
label_functions,
num_threads=10,
batch_size=candidate_dfs[key]['candidate_id'].shape[0]
)
for key in candidate_dfs
}
pickle.dump(label_matricies, open("data/label_matricies.pkl", "wb"))
lf_names = list(DG_LFS["DaG"].keys())
###Output
_____no_output_____
###Markdown
Visualize Label Functions Before training the generative model, here are some visualizations for the given label functions. These visualizations are helpful for determining the efficacy of each label function, as well as for observing the overlaps and conflicts between functions.
###Code
plt.rcParams.update({'font.size': 10})
plot_label_matrix_heatmap(label_matricies['train'].T,
yaxis_tick_labels=lf_names,
figsize=(10,8), font_size=10)
###Output
_____no_output_____
###Markdown
Looking at the heatmap above, this is a decent distribution of labels: some label functions (the distant supervision ones) cover a lot of data points, while others emit labels very sparsely.
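Coverage can also be checked numerically; below is a rough sketch, assuming `label_matricies['train']` is the sparse candidates-by-label-functions matrix built above.
###Code
import numpy as np

# Sketch: fraction of training candidates that each label function labels
# (i.e., the fraction of non-zero entries per column).
train_matrix = label_matricies['train']
coverage = np.ravel((train_matrix != 0).sum(axis=0)) / train_matrix.shape[0]
coverage_series = pd.Series(coverage, index=lf_names, name='coverage')
print(coverage_series.sort_values(ascending=False).head())
###Output
_____no_output_____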
###Code
plot_label_matrix_heatmap(get_overlap_matrix(label_matricies['train'], normalize=True),
yaxis_tick_labels=lf_names, xaxis_tick_labels=lf_names,
figsize=(10,8), colorbar=False, plot_title="Overlap Matrix")
###Output
_____no_output_____
###Markdown
The overlap matrix above shows how often pairs of label functions label the same candidates. The brighter the color, the more a label function overlaps with another label function.
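For reference, the overlap for a single pair of label functions can be computed directly; a sketch, under the same assumption about `label_matricies['train']` being a sparse label matrix:
###Code
import numpy as np

# Sketch: overlap between two label functions = fraction of candidates
# that both functions label (both column entries non-zero).
labeled = (label_matricies['train'] != 0).toarray()

def overlap(i, j):
    """Fraction of candidates labeled by both label function i and j."""
    return float(np.mean(labeled[:, i] & labeled[:, j]))

print(overlap(0, 1))
###Output
_____no_output_____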
###Code
plot_label_matrix_heatmap(get_conflict_matrix(label_matricies['train'], normalize=True),
yaxis_tick_labels=lf_names, xaxis_tick_labels=lf_names,
figsize=(10,8), colorbar=False, plot_title="Conflict Matrix")
###Output
_____no_output_____
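###Markdown
The same pairwise idea extends to conflicts; a sketch, again assuming the sparse label matrix built above:
###Code
import numpy as np

# Sketch: conflict between two label functions = fraction of candidates
# where both emit a label but the emitted labels disagree.
train_dense = label_matricies['train'].toarray()

def conflict(i, j):
    """Fraction of candidates where label functions i and j disagree."""
    both = (train_dense[:, i] != 0) & (train_dense[:, j] != 0)
    return float(np.mean(both & (train_dense[:, i] != train_dense[:, j])))

print(conflict(0, 1))
###Output
_____no_output_____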
###Markdown
The conflict matrix above shows how often label functions conflict with each other: the brighter the color, the more often a pair of label functions emits disagreeing labels. Ignoring the diagonal, there aren't many conflicts between functions, except between LF_DG_NO_CONCLUSION and LF_DG_ALLOWED_DISTANCE. Train the Generative Model After visualizing the label functions and their associated properties, it is now time to work on the generative model. As with most machine learning pipelines, the first step is to find the best hyperparameters for this model. Using the grid search algorithm, the following parameters were optimized: amount of burn-in, strength of regularization, and number of epochs to run the model. Set the hyperparameter grid search
###Code
import numpy as np

regularization_grid = np.round(np.linspace(0.1, 6, num=25), 3)
###Output
_____no_output_____
###Markdown
What are the best hyperparameters for the conditionally independent model?
###Code
L = convert_labels(label_matricies['train'].toarray(), 'plusminus', 'categorical')
L_dev = convert_labels(label_matricies['dev'].toarray(), 'plusminus', 'categorical')
L_test = convert_labels(label_matricies['test'].toarray(), 'plusminus', 'categorical')
validation_data = list(
zip(
[L[:,:7], L[:, :24], L],
[L_dev[:,:7], L_dev[:, :24], L_dev]
)
)
test_data = list(
zip(
[L[:,:7], L[:, :24], L],
[L_test[:,:7], L_test[:, :24], L_test]
)
)
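# The column slices above select the label-function groups being compared
# (counts assumed from the slices themselves): the first 7 columns are the
# knowledge-base (distant supervision) sources, the first 24 add the
# text-pattern rules, and the full matrix adds the bicluster-derived
# label functions.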
model_labels = ["Knowledge Bases (KB)", "KB+Text Patterns", "All"]
model_grid_search = {}
for model_data, model_label in zip(validation_data, model_labels):
label_model = LabelModel(k=2, seed=100)
grid_results = {}
for param in regularization_grid:
label_model.train_model(model_data[0], n_epochs=1000, verbose=False, lr=0.01, l2=param)
grid_results[str(param)] = label_model.predict_proba(model_data[1])[:,0]
model_grid_search[model_label] = pd.DataFrame.from_dict(grid_results)
model_grid_aucs = {}
for model in model_grid_search:
model_grid_aucs[model] = plot_curve(model_grid_search[model], candidate_dfs['dev'].curated_dsh,
figsize=(16,6), model_type='scatterplot', plot_title=model, metric="ROC", font_size=10)
model_grid_auc_dfs = {}
for model in model_grid_aucs:
model_grid_auc_dfs[model] = (
get_auc_significant_stats(candidate_dfs['dev'], model_grid_aucs[model])
.sort_values('auroc', ascending=False)
)
print(model)
print(model_grid_auc_dfs[model].head(5))
print()
###Output
mu: 26250.000000, sigma: 1480.498227
Distant Supervision (DS)
auroc u z_u p_value
3.05 0.560895 29447.0 2.159408 0.015409
3.296 0.560895 29447.0 2.159408 0.015409
3.542 0.560895 29447.0 2.159408 0.015409
5.754 0.560267 29414.0 2.137118 0.016294
5.508 0.560267 29414.0 2.137118 0.016294
mu: 26250.000000, sigma: 1480.498227
DS+User Defined Rules
auroc u z_u p_value
1.821 0.655390 34408.0 5.510307 1.791040e-08
1.575 0.654419 34357.0 5.475859 2.176968e-08
2.067 0.654419 34357.0 5.475859 2.176968e-08
2.313 0.653638 34316.0 5.448166 2.544594e-08
1.329 0.653505 34309.0 5.443438 2.613100e-08
mu: 26250.000000, sigma: 1480.498227
All
auroc u z_u p_value
2.313 0.613943 32232.0 4.040532 0.000027
2.067 0.613876 32228.5 4.038168 0.000027
1.821 0.613743 32221.5 4.033439 0.000027
2.558 0.613419 32204.5 4.021957 0.000029
3.05 0.613314 32199.0 4.018242 0.000029
###Markdown
Final Evaluation on Held out Hand Labeled Test Data
###Code
dev_model_df = pd.DataFrame()
best_hyper_parameters = [1.083, 2.067, 1.575]
for best_model, model_data, model_label in zip(best_hyper_parameters, validation_data, model_labels):
label_model = LabelModel(k=2, seed=100)
label_model.train_model(model_data[0], n_epochs=1000, verbose=False, lr=0.01, l2=best_model)
dev_model_df[model_label] = label_model.predict_proba(model_data[1])[:,0]
_ = plot_curve(
dev_model_df,
candidate_dfs['dev'].curated_dsh,
model_type='curve', figsize=(10,8),
plot_title="Disease Associates Gene AUROC on Dev Set", font_size=16
)
_ = plot_curve(
dev_model_df,
candidate_dfs['dev'].curated_dsh,
model_type='curve', figsize=(12,7),
plot_title="DaG Precision Recall Curve on Dev",
metric='PR', font_size=16
)
label_model = LabelModel(k=2, seed=100)
label_model.train_model(validation_data[1][0], n_epochs=1000, verbose=False, lr=0.01, l2=2.067)
dev_predictions = convert_labels(label_model.predict(validation_data[1][1]), 'categorical', 'onezero')
dev_marginals = label_model.predict_proba(validation_data[1][1])[:,0]
plt.rcParams.update({'font.size': 16})
plt.figure(figsize=(10,6))
plot_predictions_histogram(
dev_predictions,
candidate_dfs['dev'].curated_dsh.astype(int).values,
title="Prediction Histogram for Dev Set"
)
confusion_matrix(
convert_labels(candidate_dfs['dev'].curated_dsh.values, 'onezero', 'categorical'),
convert_labels(dev_predictions, 'onezero', 'categorical')
)
lf_summary(label_matricies['dev'], Y=candidate_dfs['dev'].curated_dsh.apply(lambda x: 1 if x > 0 else 2).values, lf_names=lf_names)
plot_label_matrix_heatmap(convert_labels(label_matricies['dev'].toarray(), 'categorical', 'plusminus').T,
yaxis_tick_labels=lf_names,
figsize=(10,12), font_size=10)
output_file = "data/train_marginals.pkl"
pickle.dump(label_model.predict_proba(L[:, :24]), open(output_file, "wb"))
###Output
_____no_output_____ |
4_2_Robot_Localization/6_1. Move Function, exercise.ipynb | ###Markdown
Move FunctionNow that you know how a robot uses sensor measurements to update its idea of its own location, let's see how we can incorporate motion into this location. In this notebook, let's go over the steps a robot takes to help localize itself from an initial, uniform distribution to sensing, moving and updating that distribution.We include the `sense` function that you've seen, which updates an initial distribution based on whether a robot senses a grid color: red or green. Next, you're tasked with writing a function `move` that incorporates motion into the distribution. As seen below, **one motion `U= 1` to the right, causes all values in a distribution to shift one grid cell to the right.** First let's include our usual resource imports and display function.
###Code
# importing resources
import matplotlib.pyplot as plt
import numpy as np
###Output
_____no_output_____
###Markdown
A helper function for visualizing a distribution.
###Code
def display_map(grid, bar_width=1):
if(len(grid) > 0):
x_labels = range(len(grid))
plt.bar(x_labels, height=grid, width=bar_width, color='b')
plt.xlabel('Grid Cell')
plt.ylabel('Probability')
plt.ylim(0, 1) # range of 0-1 for probability values
plt.title('Probability of the robot being at each cell in the grid')
plt.xticks(np.arange(min(x_labels), max(x_labels)+1, 1))
plt.show()
else:
print('Grid is empty')
###Output
_____no_output_____
###Markdown
You are given the initial variables and the complete `sense` function, below.
###Code
# given initial variables
p=[0, 1, 0, 0, 0]
# the color of each grid cell in the 1D world
world=['green', 'red', 'red', 'green', 'green']
# Z, the sensor reading ('red' or 'green')
Z = 'red'
pHit = 0.6
pMiss = 0.2
# You are given the complete sense function
def sense(p, Z):
''' Takes in a current probability distribution, p, and a sensor reading, Z.
Returns a *normalized* distribution after the sensor measurement has been made, q.
This should be accurate whether Z is 'red' or 'green'. '''
q=[]
# loop through all grid cells
for i in range(len(p)):
# check if the sensor reading is equal to the color of the grid cell
# if so, hit = 1
# if not, hit = 0
hit = (Z == world[i])
q.append(p[i] * (hit * pHit + (1-hit) * pMiss))
# sum up all the components
s = sum(q)
# divide all elements of q by the sum to normalize
for i in range(len(p)):
q[i] = q[i] / s
return q
# Commented out code for measurements
# for k in range(len(measurements)):
# p = sense(p, measurements)
###Output
_____no_output_____
###Markdown
QUIZ: Program a function that returns a new distribution q, shifted to the right by the motion (U) units. This function should shift a distribution with the motion, U. Keep in mind that this world is cyclic and that if U=0, q should be the same as the given p. You should see all the values in `p` are moved to the right by 1, for U=1.
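One handy sanity check for any solution is `numpy.roll`, which implements exactly this cyclic shift (a sketch; `np` is already imported above):
###Code
# Sketch: numpy.roll performs the same cyclic right-shift, so it can be used
# to verify a hand-written move() implementation.
print(list(np.roll([0, 1, 0, 0, 0], 1)))  # expected: [0, 0, 1, 0, 0]
###Output
_____no_output_____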
###Code
## TODO: Complete this move function so that it shifts a probability distribution, p
## by a given motion, U
def move(p, U):
    assert U == 0 or U == 1, 'Motion should equal 0 or 1'
    q = p
    if U == 1:
        q = [q[-1]] + q[:-1]
    return q
p = move(p,1)
print(p)
display_map(p)
###Output
[0, 0, 1, 0, 0]
###Markdown
Move FunctionNow that you know how a robot uses sensor measurements to update its idea of its own location, let's see how we can incorporate motion into this location. In this notebook, let's go over the steps a robot takes to help localize itself from an initial, uniform distribution to sensing, moving and updating that distribution.We include the `sense` function that you've seen, which updates an initial distribution based on whether a robot senses a grid color: red or green. Next, you're tasked with writing a function `move` that incorporates motion into the distribution. As seen below, **one motion `U= 1` to the right, causes all values in a distribution to shift one grid cell to the right.** First let's include our usual resource imports and display function.
###Code
# importing resources
import matplotlib.pyplot as plt
import numpy as np
###Output
_____no_output_____
###Markdown
A helper function for visualizing a distribution.
###Code
def display_map(grid, bar_width=1):
if(len(grid) > 0):
x_labels = range(len(grid))
plt.bar(x_labels, height=grid, width=bar_width, color='b')
plt.xlabel('Grid Cell')
plt.ylabel('Probability')
plt.ylim(0, 1) # range of 0-1 for probability values
plt.title('Probability of the robot being at each cell in the grid')
plt.xticks(np.arange(min(x_labels), max(x_labels)+1, 1))
plt.show()
else:
print('Grid is empty')
###Output
_____no_output_____
###Markdown
You are given the initial variables and the complete `sense` function, below.
###Code
# given initial variables
p=[0, 1, 0, 0, 0]
# the color of each grid cell in the 1D world
world=['green', 'red', 'red', 'green', 'green']
# Z, the sensor reading ('red' or 'green')
Z = 'red'
pHit = 0.6
pMiss = 0.2
# You are given the complete sense function
def sense(p, Z):
''' Takes in a current probability distribution, p, and a sensor reading, Z.
Returns a *normalized* distribution after the sensor measurement has been made, q.
This should be accurate whether Z is 'red' or 'green'. '''
q=[]
# loop through all grid cells
for i in range(len(p)):
# check if the sensor reading is equal to the color of the grid cell
# if so, hit = 1
# if not, hit = 0
hit = (Z == world[i])
q.append(p[i] * (hit * pHit + (1-hit) * pMiss))
# sum up all the components
s = sum(q)
# divide all elements of q by the sum to normalize
for i in range(len(p)):
q[i] = q[i] / s
return q
# Commented out code for measurements
# for k in range(len(measurements)):
# p = sense(p, measurements)
###Output
_____no_output_____
###Markdown
QUIZ: Program a function that returns a new distribution q, shifted to the right by the motion (U) units. This function should shift a distribution with the motion, U. Keep in mind that this world is cyclic and that if U=0, q should be the same as the given p. You should see all the values in `p` are moved to the right by 1, for U=1.
###Code
## TODO: Complete this move function so that it shifts a probability distribution, p
## by a given motion, U
def move(p, U):
    q = []
    for i in range(len(p)):
        # wrap around the cyclic world with a modulo
        index = (i - U) % len(p)
        q.append(p[index])
    return q
p = move(p,1)
print(p)
display_map(p)
###Output
[0, 0, 1, 0, 0]
###Markdown
Move FunctionNow that you know how a robot uses sensor measurements to update its idea of its own location, let's see how we can incorporate motion into this location. In this notebook, let's go over the steps a robot takes to help localize itself from an initial, uniform distribution to sensing, moving and updating that distribution.We include the `sense` function that you've seen, which updates an initial distribution based on whether a robot senses a grid color: red or green. Next, you're tasked with writing a function `move` that incorporates motion into the distribution. As seen below, **one motion `U= 1` to the right, causes all values in a distribution to shift one grid cell to the right.** First let's include our usual resource imports and display function.
###Code
# importing resources
import matplotlib.pyplot as plt
import numpy as np
###Output
_____no_output_____
###Markdown
A helper function for visualizing a distribution.
###Code
def display_map(grid, bar_width=1):
if(len(grid) > 0):
x_labels = range(len(grid))
plt.bar(x_labels, height=grid, width=bar_width, color='b')
plt.xlabel('Grid Cell')
plt.ylabel('Probability')
plt.ylim(0, 1) # range of 0-1 for probability values
plt.title('Probability of the robot being at each cell in the grid')
plt.xticks(np.arange(min(x_labels), max(x_labels)+1, 1))
plt.show()
else:
print('Grid is empty')
###Output
_____no_output_____
###Markdown
You are given the initial variables and the complete `sense` function, below.
###Code
# given initial variables
p=[0, 1, 0, 0, 0]
# the color of each grid cell in the 1D world
world=['green', 'red', 'red', 'green', 'green']
# Z, the sensor reading ('red' or 'green')
Z = 'red'
pHit = 0.6
pMiss = 0.2
# You are given the complete sense function
def sense(p, Z):
''' Takes in a current probability distribution, p, and a sensor reading, Z.
Returns a *normalized* distribution after the sensor measurement has been made, q.
This should be accurate whether Z is 'red' or 'green'. '''
q=[]
# loop through all grid cells
for i in range(len(p)):
# check if the sensor reading is equal to the color of the grid cell
# if so, hit = 1
# if not, hit = 0
hit = (Z == world[i])
q.append(p[i] * (hit * pHit + (1-hit) * pMiss))
# sum up all the components
s = sum(q)
# divide all elements of q by the sum to normalize
for i in range(len(p)):
q[i] = q[i] / s
return q
# Commented out code for measurements
# for k in range(len(measurements)):
# p = sense(p, measurements)
###Output
_____no_output_____
###Markdown
QUIZ: Program a function that returns a new distribution q, shifted to the right by the motion (U) units. This function should shift a distribution with the motion, U. Keep in mind that this world is cyclic and that if U=0, q should be the same as the given p. You should see all the values in `p` are moved to the right by 1, for U=1.
###Code
p=[0, 1, 0, 0, 0]
## TODO: Complete this move function so that it shifts a probability distribution, p
## by a given motion, U
def move(p, U):
    # Calculate the length of the array
    length = len(p)
    # Establish the number of steps
    steps = abs(U) % length
    # Choose the direction
    isPositive = U > 0
    if steps == 0:
        return p  # if no change, return unchanged p
    elif isPositive:
        return p[-steps:] + p[:length-steps]  # shift the array to the right
    else:
        return p[steps - length:] + p[:steps]  # shift the array to the left
def move2(p, U):
    # length of the array p
    leng = len(p)
    # in this approach we place element (i - U) at position i, assuming a cyclic
    # list, so for the first element (index 0) with U = 1 we take element -1,
    # i.e., the last one (index 4)
    return [p[(i - U) % leng] for i in range(leng)]
# Both methods are working properly
p1 = move(p, -3)
p2 = move2(p, -3)
print(p1)
print(p2)
display_map(p1)
display_map(p2)
###Output
[0, 0, 0, 1, 0]
[0, 0, 0, 1, 0]
###Markdown
Move FunctionNow that you know how a robot uses sensor measurements to update its idea of its own location, let's see how we can incorporate motion into this location. In this notebook, let's go over the steps a robot takes to help localize itself from an initial, uniform distribution to sensing, moving and updating that distribution.We include the `sense` function that you've seen, which updates an initial distribution based on whether a robot senses a grid color: red or green. Next, you're tasked with writing a function `move` that incorporates motion into the distribution. As seen below, **one motion `U= 1` to the right, causes all values in a distribution to shift one grid cell to the right.** First let's include our usual resource imports and display function.
###Code
# importing resources
import matplotlib.pyplot as plt
import numpy as np
###Output
_____no_output_____
###Markdown
A helper function for visualizing a distribution.
###Code
def display_map(grid, bar_width=1):
if(len(grid) > 0):
x_labels = range(len(grid))
plt.bar(x_labels, height=grid, width=bar_width, color='b')
plt.xlabel('Grid Cell')
plt.ylabel('Probability')
plt.ylim(0, 1) # range of 0-1 for probability values
plt.title('Probability of the robot being at each cell in the grid')
plt.xticks(np.arange(min(x_labels), max(x_labels)+1, 1))
plt.show()
else:
print('Grid is empty')
###Output
_____no_output_____
###Markdown
You are given the initial variables and the complete `sense` function, below.
###Code
# given initial variables
p=[0, 1, 0, 0, 0]
# the color of each grid cell in the 1D world
world=['green', 'red', 'red', 'green', 'green']
# Z, the sensor reading ('red' or 'green')
Z = 'red'
pHit = 0.6
pMiss = 0.2
# You are given the complete sense function
def sense(p, Z):
''' Takes in a current probability distribution, p, and a sensor reading, Z.
Returns a *normalized* distribution after the sensor measurement has been made, q.
This should be accurate whether Z is 'red' or 'green'. '''
q=[]
# loop through all grid cells
for i in range(len(p)):
# check if the sensor reading is equal to the color of the grid cell
# if so, hit = 1
# if not, hit = 0
hit = (Z == world[i])
q.append(p[i] * (hit * pHit + (1-hit) * pMiss))
# sum up all the components
s = sum(q)
# divide all elements of q by the sum to normalize
for i in range(len(p)):
q[i] = q[i] / s
return q
# Commented out code for measurements
# for k in range(len(measurements)):
# p = sense(p, measurements)
###Output
_____no_output_____
###Markdown
QUIZ: Program a function that returns a new distribution q, shifted to the right by the motion (U) units. This function should shift a distribution with the motion, U. Keep in mind that this world is cyclic and that if U=0, q should be the same as the given p. You should see all the values in `p` are moved to the right by 1, for U=1.
###Code
## TODO: Complete this move function so that it shifts a probability distribution, p
## by a given motion, U
def move(p, U):
    q = []
    # shift each value right by U, wrapping around (cyclic world)
    for i in range(len(p)):
        q.append(p[(i - U) % len(p)])
    return q
p = move(p,1)
print(p)
display_map(p)
###Output
_____no_output_____
###Markdown
Move FunctionNow that you know how a robot uses sensor measurements to update its idea of its own location, let's see how we can incorporate motion into this location. In this notebook, let's go over the steps a robot takes to help localize itself from an initial, uniform distribution to sensing, moving and updating that distribution.We include the `sense` function that you've seen, which updates an initial distribution based on whether a robot senses a grid color: red or green. Next, you're tasked with writing a function `move` that incorporates motion into the distribution. As seen below, **one motion `U= 1` to the right, causes all values in a distribution to shift one grid cell to the right.** First let's include our usual resource imports and display function.
###Code
# importing resources
import matplotlib.pyplot as plt
import numpy as np
###Output
_____no_output_____
###Markdown
A helper function for visualizing a distribution.
###Code
def display_map(grid, bar_width=1):
if(len(grid) > 0):
x_labels = range(len(grid))
plt.bar(x_labels, height=grid, width=bar_width, color='b')
plt.xlabel('Grid Cell')
plt.ylabel('Probability')
plt.ylim(0, 1) # range of 0-1 for probability values
plt.title('Probability of the robot being at each cell in the grid')
plt.xticks(np.arange(min(x_labels), max(x_labels)+1, 1))
plt.show()
else:
print('Grid is empty')
###Output
_____no_output_____
###Markdown
You are given the initial variables and the complete `sense` function, below.
###Code
# given initial variables
p=[0, 1, 0, 0, 0]
# the color of each grid cell in the 1D world
world=['green', 'red', 'red', 'green', 'green']
# Z, the sensor reading ('red' or 'green')
Z = 'red'
pHit = 0.6
pMiss = 0.2
# You are given the complete sense function
def sense(p, Z):
''' Takes in a current probability distribution, p, and a sensor reading, Z.
Returns a *normalized* distribution after the sensor measurement has been made, q.
This should be accurate whether Z is 'red' or 'green'. '''
q=[]
# loop through all grid cells
for i in range(len(p)):
# check if the sensor reading is equal to the color of the grid cell
# if so, hit = 1
# if not, hit = 0
hit = (Z == world[i])
q.append(p[i] * (hit * pHit + (1-hit) * pMiss))
# sum up all the components
s = sum(q)
# divide all elements of q by the sum to normalize
for i in range(len(p)):
q[i] = q[i] / s
return q
# Commented out code for measurements
# for k in range(len(measurements)):
# p = sense(p, measurements)
###Output
_____no_output_____
###Markdown
QUIZ: Program a function that returns a new distribution q, shifted to the right by the motion (U) units. This function should shift a distribution with the motion, U. Keep in mind that this world is cyclic and that if U=0, q should be the same as the given p. You should see all the values in `p` are moved to the right by 1, for U=1.
###Code
## TODO: Complete this move function so that it shifts a probability distribution, p
## by a given motion, U
def move(p, U):
    # Your code here
    U = U % len(p)
    if U == 0:
        return p  # U = 0 would otherwise duplicate the list via p[-0:]
    q = p[-U:] + p[:len(p)-U]
    return q
p = move(p,1)
print(p)
display_map(p)
###Output
[0, 0, 1, 0, 0]
###Markdown
Move FunctionNow that you know how a robot uses sensor measurements to update its idea of its own location, let's see how we can incorporate motion into this location. In this notebook, let's go over the steps a robot takes to help localize itself from an initial, uniform distribution to sensing, moving and updating that distribution.We include the `sense` function that you've seen, which updates an initial distribution based on whether a robot senses a grid color: red or green. Next, you're tasked with writing a function `move` that incorporates motion into the distribution. As seen below, **one motion `U= 1` to the right, causes all values in a distribution to shift one grid cell to the right.** First let's include our usual resource imports and display function.
###Code
# importing resources
import matplotlib.pyplot as plt
import numpy as np
###Output
_____no_output_____
###Markdown
A helper function for visualizing a distribution.
###Code
def display_map(grid, bar_width=1):
if(len(grid) > 0):
x_labels = range(len(grid))
plt.bar(x_labels, height=grid, width=bar_width, color='b')
plt.xlabel('Grid Cell')
plt.ylabel('Probability')
plt.ylim(0, 1) # range of 0-1 for probability values
plt.title('Probability of the robot being at each cell in the grid')
plt.xticks(np.arange(min(x_labels), max(x_labels)+1, 1))
plt.show()
else:
print('Grid is empty')
###Output
_____no_output_____
###Markdown
You are given the initial variables and the complete `sense` function, below.
###Code
# given initial variables
p=[0, 1, 0, 0, 0]
# the color of each grid cell in the 1D world
world=['green', 'red', 'red', 'green', 'green']
# Z, the sensor reading ('red' or 'green')
Z = 'red'
pHit = 0.6
pMiss = 0.2
# You are given the complete sense function
def sense(p, Z):
''' Takes in a current probability distribution, p, and a sensor reading, Z.
Returns a *normalized* distribution after the sensor measurement has been made, q.
This should be accurate whether Z is 'red' or 'green'. '''
q=[]
# loop through all grid cells
for i in range(len(p)):
# check if the sensor reading is equal to the color of the grid cell
# if so, hit = 1
# if not, hit = 0
hit = (Z == world[i])
q.append(p[i] * (hit * pHit + (1-hit) * pMiss))
# sum up all the components
s = sum(q)
# divide all elements of q by the sum to normalize
for i in range(len(p)):
q[i] = q[i] / s
return q
# Commented out code for measurements
# for k in range(len(measurements)):
# p = sense(p, measurements)
###Output
_____no_output_____
###Markdown
QUIZ: Program a function that returns a new distribution q, shifted to the right by the motion (U) units. This function should shift a distribution with the motion, U. Keep in mind that this world is cyclic and that if U=0, q should be the same as the given p. You should see all the values in `p` are moved to the right by 1, for U=1.
###Code
## TODO: Complete this move function so that it shifts a probability distribution, p
## by a given motion, U
def move(p, U):
q=[]
# Your code here
index = len(p) - U%len(p)
q = p[index:len(p)] + p[:index]
return q
p = move(p,1)
print(p)
display_map(p)
###Output
[0, 0, 1, 0, 0]
###Markdown
Move FunctionNow that you know how a robot uses sensor measurements to update its idea of its own location, let's see how we can incorporate motion into this location. In this notebook, let's go over the steps a robot takes to help localize itself from an initial, uniform distribution to sensing, moving and updating that distribution.We include the `sense` function that you've seen, which updates an initial distribution based on whether a robot senses a grid color: red or green. Next, you're tasked with writing a function `move` that incorporates motion into the distribution. As seen below, **one motion `U= 1` to the right, causes all values in a distribution to shift one grid cell to the right.** First let's include our usual resource imports and display function.
###Code
# importing resources
import matplotlib.pyplot as plt
import numpy as np
###Output
_____no_output_____
###Markdown
A helper function for visualizing a distribution.
###Code
def display_map(grid, bar_width=1):
if(len(grid) > 0):
x_labels = range(len(grid))
plt.bar(x_labels, height=grid, width=bar_width, color='b')
plt.xlabel('Grid Cell')
plt.ylabel('Probability')
plt.ylim(0, 1) # range of 0-1 for probability values
plt.title('Probability of the robot being at each cell in the grid')
plt.xticks(np.arange(min(x_labels), max(x_labels)+1, 1))
plt.show()
else:
print('Grid is empty')
###Output
_____no_output_____
###Markdown
You are given the initial variables and the complete `sense` function, below.
###Code
# given initial variables
p=[0, 1, 0, 0, 0]
# the color of each grid cell in the 1D world
world=['green', 'red', 'red', 'green', 'green']
# Z, the sensor reading ('red' or 'green')
Z = 'red'
pHit = 0.6
pMiss = 0.2
# You are given the complete sense function
def sense(p, Z):
''' Takes in a current probability distribution, p, and a sensor reading, Z.
Returns a *normalized* distribution after the sensor measurement has been made, q.
This should be accurate whether Z is 'red' or 'green'. '''
q=[]
# loop through all grid cells
for i in range(len(p)):
# check if the sensor reading is equal to the color of the grid cell
# if so, hit = 1
# if not, hit = 0
hit = (Z == world[i])
q.append(p[i] * (hit * pHit + (1-hit) * pMiss))
# sum up all the components
s = sum(q)
# divide all elements of q by the sum to normalize
for i in range(len(p)):
q[i] = q[i] / s
return q
# Commented out code for measurements
# for k in range(len(measurements)):
# p = sense(p, measurements)
###Output
_____no_output_____
###Markdown
QUIZ: Program a function that returns a new distribution q, shifted to the right by the motion (U) units. This function should shift a distribution with the motion, U. Keep in mind that this world is cyclic and that if U=0, q should be the same as the given p. You should see all the values in `p` are moved to the right by 1, for U=1.
###Code
## TODO: Complete this move function so that it shifts a probability distribution, p
## by a given motion, U
def move(p, U):
q=p.copy()
l = len(p)
for i in range(l):
q[i] = p[(i - U) % l]
return q
p = move(p,1)
print(p)
display_map(p)
###Output
[0, 0, 0, 1, 0]
###Markdown
Move FunctionNow that you know how a robot uses sensor measurements to update its idea of its own location, let's see how we can incorporate motion into this location. In this notebook, let's go over the steps a robot takes to help localize itself from an initial, uniform distribution to sensing, moving and updating that distribution.We include the `sense` function that you've seen, which updates an initial distribution based on whether a robot senses a grid color: red or green. Next, you're tasked with writing a function `move` that incorporates motion into the distribution. As seen below, **one motion `U= 1` to the right, causes all values in a distribution to shift one grid cell to the right.** First let's include our usual resource imports and display function.
###Code
# importing resources
import matplotlib.pyplot as plt
import numpy as np
###Output
_____no_output_____
###Markdown
A helper function for visualizing a distribution.
###Code
def display_map(grid, bar_width=1):
if(len(grid) > 0):
x_labels = range(len(grid))
plt.bar(x_labels, height=grid, width=bar_width, color='b')
plt.xlabel('Grid Cell')
plt.ylabel('Probability')
plt.ylim(0, 1) # range of 0-1 for probability values
plt.title('Probability of the robot being at each cell in the grid')
plt.xticks(np.arange(min(x_labels), max(x_labels)+1, 1))
plt.show()
else:
print('Grid is empty')
###Output
_____no_output_____
###Markdown
You are given the initial variables and the complete `sense` function, below.
###Code
# given initial variables
p=[0, 1, 0, 0, 0]
# the color of each grid cell in the 1D world
world=['green', 'red', 'red', 'green', 'green']
# Z, the sensor reading ('red' or 'green')
Z = 'red'
pHit = 0.6
pMiss = 0.2
# You are given the complete sense function
def sense(p, Z):
''' Takes in a current probability distribution, p, and a sensor reading, Z.
Returns a *normalized* distribution after the sensor measurement has been made, q.
This should be accurate whether Z is 'red' or 'green'. '''
q=[]
# loop through all grid cells
for i in range(len(p)):
# check if the sensor reading is equal to the color of the grid cell
# if so, hit = 1
# if not, hit = 0
hit = (Z == world[i])
q.append(p[i] * (hit * pHit + (1-hit) * pMiss))
# sum up all the components
s = sum(q)
# divide all elements of q by the sum to normalize
for i in range(len(p)):
q[i] = q[i] / s
return q
# Commented out code for measurements
# for k in range(len(measurements)):
# p = sense(p, measurements)
###Output
_____no_output_____
###Markdown
QUIZ: Program a function that returns a new distribution q, shifted to the right by the motion (U) units. This function should shift a distribution with the motion, U. Keep in mind that this world is cyclic and that if U=0, q should be the same as the given p. You should see all the values in `p` are moved to the right by 1, for U=1.
###Code
## TODO: Complete this move function so that it shifts a probability distribution, p
## by a given motion, U
def move(p, U):
    q = []
    # shift each value right by U, wrapping around (cyclic world)
    for i in range(len(p)):
        q.append(p[(i - U) % len(p)])
    return q
p = move(p,1)
print(p)
display_map(p)
###Output
[0, 0, 1, 0, 0]
###Markdown
Move FunctionNow that you know how a robot uses sensor measurements to update its idea of its own location, let's see how we can incorporate motion into this location. In this notebook, let's go over the steps a robot takes to help localize itself from an initial, uniform distribution to sensing, moving and updating that distribution.We include the `sense` function that you've seen, which updates an initial distribution based on whether a robot senses a grid color: red or green. Next, you're tasked with writing a function `move` that incorporates motion into the distribution. As seen below, **one motion `U= 1` to the right, causes all values in a distribution to shift one grid cell to the right.** First let's include our usual resource imports and display function.
###Code
# importing resources
import matplotlib.pyplot as plt
import numpy as np
###Output
_____no_output_____
###Markdown
A helper function for visualizing a distribution.
###Code
def display_map(grid, bar_width=1):
if(len(grid) > 0):
x_labels = range(len(grid))
plt.bar(x_labels, height=grid, width=bar_width, color='b')
plt.xlabel('Grid Cell')
plt.ylabel('Probability')
plt.ylim(0, 1) # range of 0-1 for probability values
plt.title('Probability of the robot being at each cell in the grid')
plt.xticks(np.arange(min(x_labels), max(x_labels)+1, 1))
plt.show()
else:
print('Grid is empty')
###Output
_____no_output_____
###Markdown
You are given the initial variables and the complete `sense` function, below.
###Code
# given initial variables
p=[0, 1, 0, 0, 0]
# the color of each grid cell in the 1D world
world=['green', 'red', 'red', 'green', 'green']
# Z, the sensor reading ('red' or 'green')
Z = 'red'
pHit = 0.6
pMiss = 0.2
# You are given the complete sense function
def sense(p, Z):
''' Takes in a current probability distribution, p, and a sensor reading, Z.
Returns a *normalized* distribution after the sensor measurement has been made, q.
This should be accurate whether Z is 'red' or 'green'. '''
q=[]
# loop through all grid cells
for i in range(len(p)):
# check if the sensor reading is equal to the color of the grid cell
# if so, hit = 1
# if not, hit = 0
hit = (Z == world[i])
q.append(p[i] * (hit * pHit + (1-hit) * pMiss))
# sum up all the components
s = sum(q)
# divide all elements of q by the sum to normalize
for i in range(len(p)):
q[i] = q[i] / s
return q
# Commented out code for measurements
# for k in range(len(measurements)):
# p = sense(p, measurements)
###Output
_____no_output_____
###Markdown
QUIZ: Program a function that returns a new distribution q, shifted to the right by the motion (U) units. This function should shift a distribution with the motion, U. Keep in mind that this world is cyclic and that if U=0, q should be the same as the given p. You should see all the values in `p` are moved to the right by 1, for U=1.
###Code
## Complete this move function so that it shifts a probability distribution, p
## by a given motion, U
def move(p, U):
    q = []
    for i in range(len(p)):
        q.append(p[(i - U) % len(p)])
    return q
print(f'start{p}')
p = move(p,1)
print(p)
p = move(p,1)
print(p)
p = move(p,1)
print(p)
p = move(p,1)
print(p)
p = move(p,1)
print(p)
display_map(p)
###Output
start[0, 1, 0, 0, 0]
[0, 0, 1, 0, 0]
[0, 0, 0, 1, 0]
[0, 0, 0, 0, 1]
[1, 0, 0, 0, 0]
[0, 1, 0, 0, 0]
###Markdown
Move FunctionNow that you know how a robot uses sensor measurements to update its idea of its own location, let's see how we can incorporate motion into this location. In this notebook, let's go over the steps a robot takes to help localize itself from an initial, uniform distribution to sensing, moving and updating that distribution.We include the `sense` function that you've seen, which updates an initial distribution based on whether a robot senses a grid color: red or green. Next, you're tasked with writing a function `move` that incorporates motion into the distribution. As seen below, **one motion `U= 1` to the right, causes all values in a distribution to shift one grid cell to the right.** First let's include our usual resource imports and display function.
###Code
# importing resources
import matplotlib.pyplot as plt
import numpy as np
###Output
_____no_output_____
###Markdown
A helper function for visualizing a distribution.
###Code
def display_map(grid, bar_width=1):
if(len(grid) > 0):
x_labels = range(len(grid))
plt.bar(x_labels, height=grid, width=bar_width, color='b')
plt.xlabel('Grid Cell')
plt.ylabel('Probability')
plt.ylim(0, 1) # range of 0-1 for probability values
plt.title('Probability of the robot being at each cell in the grid')
plt.xticks(np.arange(min(x_labels), max(x_labels)+1, 1))
plt.show()
else:
print('Grid is empty')
###Output
_____no_output_____
###Markdown
You are given the initial variables and the complete `sense` function, below.
###Code
# given initial variables
p=[0, 1, 0, 0, 0]
# the color of each grid cell in the 1D world
world=['green', 'red', 'red', 'green', 'green']
# Z, the sensor reading ('red' or 'green')
Z = 'red'
pHit = 0.6
pMiss = 0.2
# You are given the complete sense function
def sense(p, Z):
''' Takes in a current probability distribution, p, and a sensor reading, Z.
Returns a *normalized* distribution after the sensor measurement has been made, q.
This should be accurate whether Z is 'red' or 'green'. '''
q=[]
# loop through all grid cells
for i in range(len(p)):
# check if the sensor reading is equal to the color of the grid cell
# if so, hit = 1
# if not, hit = 0
hit = (Z == world[i])
q.append(p[i] * (hit * pHit + (1-hit) * pMiss))
# sum up all the components
s = sum(q)
# divide all elements of q by the sum to normalize
for i in range(len(p)):
q[i] = q[i] / s
return q
# Commented out code for measurements
#for k in range(len(measurements)):
# p = sense(p, Z)
###Output
_____no_output_____
###Markdown
QUIZ: Program a function that returns a new distribution q, shifted to the right by the motion (U) units. This function should shift a distribution with the motion, U. Keep in mind that this world is cyclic and that if U=0, q should be the same as the given p. You should see all the values in `p` are moved to the right by 1, for U=1.
###Code
## TODO: Complete this move function so that it shifts a probability distribution, p
## by a given motion, U
def move(p, U):
q=[0]* len(p)
# Your code here
for idx in range(len(p)):
q[(idx + U) % len(p)] = p[idx]
return q
p = move(p,1)
print(p)
display_map(p)
###Output
[0, 0, 1, 0, 0]
###Markdown
Move FunctionNow that you know how a robot uses sensor measurements to update its idea of its own location, let's see how we can incorporate motion into this location. In this notebook, let's go over the steps a robot takes to help localize itself from an initial, uniform distribution to sensing, moving and updating that distribution.We include the `sense` function that you've seen, which updates an initial distribution based on whether a robot senses a grid color: red or green. Next, you're tasked with writing a function `move` that incorporates motion into the distribution. As seen below, **one motion `U= 1` to the right, causes all values in a distribution to shift one grid cell to the right.** First let's include our usual resource imports and display function.
###Code
# importing resources
import matplotlib.pyplot as plt
import numpy as np
###Output
_____no_output_____
###Markdown
A helper function for visualizing a distribution.
###Code
def display_map(grid, bar_width=1):
if(len(grid) > 0):
x_labels = range(len(grid))
plt.bar(x_labels, height=grid, width=bar_width, color='b')
plt.xlabel('Grid Cell')
plt.ylabel('Probability')
plt.ylim(0, 1) # range of 0-1 for probability values
plt.title('Probability of the robot being at each cell in the grid')
plt.xticks(np.arange(min(x_labels), max(x_labels)+1, 1))
plt.show()
else:
print('Grid is empty')
###Output
_____no_output_____
###Markdown
You are given the initial variables and the complete `sense` function, below.
###Code
# given initial variables
p=[0, 1, 0, 0, 0]
# the color of each grid cell in the 1D world
world=['green', 'red', 'red', 'green', 'green']
# Z, the sensor reading ('red' or 'green')
Z = 'red'
pHit = 0.6
pMiss = 0.2
# You are given the complete sense function
def sense(p, Z):
''' Takes in a current probability distribution, p, and a sensor reading, Z.
Returns a *normalized* distribution after the sensor measurement has been made, q.
This should be accurate whether Z is 'red' or 'green'. '''
q=[]
# loop through all grid cells
for i in range(len(p)):
# check if the sensor reading is equal to the color of the grid cell
# if so, hit = 1
# if not, hit = 0
hit = (Z == world[i])
q.append(p[i] * (hit * pHit + (1-hit) * pMiss))
# sum up all the components
s = sum(q)
# divide all elements of q by the sum to normalize
for i in range(len(p)):
q[i] = q[i] / s
return q
# Commented out code for measurements
# for k in range(len(measurements)):
# p = sense(p, measurements)
###Output
_____no_output_____
###Markdown
QUIZ: Program a function that returns a new distribution q, shifted to the right by the motion (U) units. This function should shift a distribution with the motion, U. Keep in mind that this world is cyclic and that if U=0, q should be the same as the given p. You should see all the values in `p` are moved to the right by 1, for U=1.
###Code
## TODO: Complete this move function so that it shifts a probability distribution, p
## by a given motion, U
def move(p, U):
q = []
for i in range(0, len(p)):
index = (i-U) % len(p)
q.append(p[index])
return q
p=[0, 1, 0, 0, 0]
print(p)
p = move(p,1)
print(p)
display_map(p)
###Output
[0, 1, 0, 0, 0]
[0, 0, 1, 0, 0]
###Markdown
Move FunctionNow that you know how a robot uses sensor measurements to update its idea of its own location, let's see how we can incorporate motion into this location. In this notebook, let's go over the steps a robot takes to help localize itself from an initial, uniform distribution to sensing, moving and updating that distribution.We include the `sense` function that you've seen, which updates an initial distribution based on whether a robot senses a grid color: red or green. Next, you're tasked with writing a function `move` that incorporates motion into the distribution. As seen below, **one motion `U= 1` to the right, causes all values in a distribution to shift one grid cell to the right.** First let's include our usual resource imports and display function.
###Code
# importing resources
import matplotlib.pyplot as plt
import numpy as np
###Output
_____no_output_____
###Markdown
A helper function for visualizing a distribution.
###Code
def display_map(grid, bar_width=0.9):
if(len(grid) > 0):
x_labels = range(len(grid))
plt.bar(x_labels, height=grid, width=bar_width, color='b')
plt.xlabel('Grid Cell')
plt.ylabel('Probability')
plt.ylim(0, 1) # range of 0-1 for probability values
plt.title('Probability of the robot being at each cell in the grid')
plt.xticks(np.arange(min(x_labels), max(x_labels)+1, 1))
plt.show()
else:
print('Grid is empty')
###Output
_____no_output_____
###Markdown
You are given the initial variables and the complete `sense` function, below.
###Code
# given initial variables
p=[0, 1, 0, 0, 0]
# the color of each grid cell in the 1D world
world=['green', 'red', 'red', 'green', 'green']
# Z, the sensor reading ('red' or 'green')
Z = 'red'
pHit = 0.6
pMiss = 0.2
# You are given the complete sense function
def sense(p, Z):
''' Takes in a current probability distribution, p, and a sensor reading, Z.
Returns a *normalized* distribution after the sensor measurement has been made, q.
This should be accurate whether Z is 'red' or 'green'. '''
q=[]
# loop through all grid cells
for i in range(len(p)):
# check if the sensor reading is equal to the color of the grid cell
# if so, hit = 1
# if not, hit = 0
hit = (Z == world[i])
q.append(p[i] * (hit * pHit + (1-hit) * pMiss))
# sum up all the components
s = sum(q)
# divide all elements of q by the sum to normalize
for i in range(len(p)):
q[i] = q[i] / s
return q
# Commented out code for measurements
# for k in range(len(measurements)):
# p = sense(p, measurements)
###Output
_____no_output_____
###Markdown
QUIZ: Program a function that returns a new distribution q, shifted to the right by the motion (U) units. This function should shift a distribution with the motion, U. Keep in mind that this world is cyclic and that if U=0, q should be the same as the given p. You should see all the values in `p` are moved to the right by 1, for U=1.
###Code
## TODO: Complete this move function so that it shifts a probability distribution, p
## by a given motion, U
def move(p, U):
    q = []
    # Your code here
    U = U % len(p)  # wrap motions larger than the grid (cyclic world)
    if U == 0:
        return p
    q = p[-U:]
    q.extend(p[:-U])
    return q
p = move(p,4)
print(p)
display_map(p)
###Output
[1, 0, 0, 0, 0]
|
2 - Convolutional Neural Networks in TensorFlow/Week 3/Course 2 Week 3.ipynb | ###Markdown
Exercise Descriptions: https://colab.research.google.com/github/lmoroney/dlaicourse/blob/master/Exercises/Exercise%207%20-%20Transfer%20Learning/Exercise%207%20-%20Question.ipynb#scrollTo=Blhq2MAUeyGA Answers: https://colab.research.google.com/github/lmoroney/dlaicourse/blob/master/Exercises/Exercise%206%20-%20Cats%20v%20Dogs%20with%20Augmentation/Exercise%206%20-%20Answer.ipynb Tutorial: https://colab.research.google.com/github/lmoroney/dlaicourse/blob/master/Course%202%20-%20Part%206%20-%20Lesson%203%20-%20Notebook.ipynb
###Code
# Import all the necessary files!
import os
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras import Model
# Download the inception v3 weights
!wget --no-check-certificate \
https://storage.googleapis.com/mledu-datasets/inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5 \
-O /tmp/inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5
# Import the inception model
from tensorflow.keras.applications.inception_v3 import InceptionV3
# Create an instance of the inception model from the local pre-trained weights
local_weights_file = '/tmp/inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5'
pre_trained_model = InceptionV3(input_shape = (150, 150, 3),
include_top = False,
weights = None)
pre_trained_model.load_weights(local_weights_file)
# Make all the layers in the pre-trained model non-trainable
for layer in pre_trained_model.layers:
layer.trainable = False
# Print the model summary
pre_trained_model.summary()
# Expected Output is extremely large, but should end with:
#batch_normalization_v1_281 (Bat (None, 3, 3, 192) 576 conv2d_281[0][0]
#__________________________________________________________________________________________________
#activation_273 (Activation) (None, 3, 3, 320) 0 batch_normalization_v1_273[0][0]
#__________________________________________________________________________________________________
#mixed9_1 (Concatenate) (None, 3, 3, 768) 0 activation_275[0][0]
# activation_276[0][0]
#__________________________________________________________________________________________________
#concatenate_5 (Concatenate) (None, 3, 3, 768) 0 activation_279[0][0]
# activation_280[0][0]
#__________________________________________________________________________________________________
#activation_281 (Activation) (None, 3, 3, 192) 0 batch_normalization_v1_281[0][0]
#__________________________________________________________________________________________________
#mixed10 (Concatenate) (None, 3, 3, 2048) 0 activation_273[0][0]
# mixed9_1[0][0]
# concatenate_5[0][0]
# activation_281[0][0]
#==================================================================================================
#Total params: 21,802,784
#Trainable params: 0
#Non-trainable params: 21,802,784
last_layer = pre_trained_model.get_layer('mixed7')
print('last layer output shape: ', last_layer.output_shape)
last_output = last_layer.output
# Expected Output:
# ('last layer output shape: ', (None, 7, 7, 768))
# Define a Callback class that stops training once accuracy reaches 99.9%
class myCallback(tf.keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs={}):
if(logs.get('acc')>0.999):
print("\nReached 99.9% accuracy so cancelling training!")
self.model.stop_training = True
from tensorflow.keras.optimizers import RMSprop
# Flatten the output layer to 1 dimension
x = layers.Flatten()(last_output)
# Add a fully connected layer with 1,024 hidden units and ReLU activation
x = layers.Dense(1024, activation='relu')(x)
# Add a dropout rate of 0.2
x = layers.Dropout(0.2)(x)
# Add a final sigmoid layer for classification
x = layers.Dense(1, activation='sigmoid')(x)
model = Model(pre_trained_model.input, x)
model.compile(optimizer = RMSprop(lr=0.0001),
loss = 'binary_crossentropy',
metrics = ['acc'])
model.summary()
# Expected output will be large. Last few lines should be:
# mixed7 (Concatenate) (None, 7, 7, 768) 0 activation_248[0][0]
# activation_251[0][0]
# activation_256[0][0]
# activation_257[0][0]
# __________________________________________________________________________________________________
# flatten_4 (Flatten) (None, 37632) 0 mixed7[0][0]
# __________________________________________________________________________________________________
# dense_8 (Dense) (None, 1024) 38536192 flatten_4[0][0]
# __________________________________________________________________________________________________
# dropout_4 (Dropout) (None, 1024) 0 dense_8[0][0]
# __________________________________________________________________________________________________
# dense_9 (Dense) (None, 1) 1025 dropout_4[0][0]
# ==================================================================================================
# Total params: 47,512,481
# Trainable params: 38,537,217
# Non-trainable params: 8,975,264
# Get the Horse or Human dataset
!wget --no-check-certificate https://storage.googleapis.com/laurencemoroney-blog.appspot.com/horse-or-human.zip -O /tmp/horse-or-human.zip
# Get the Horse or Human Validation dataset
!wget --no-check-certificate https://storage.googleapis.com/laurencemoroney-blog.appspot.com/validation-horse-or-human.zip -O /tmp/validation-horse-or-human.zip
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import os
import zipfile
local_zip = '/tmp/horse-or-human.zip'
zip_ref = zipfile.ZipFile(local_zip, 'r')
zip_ref.extractall('/tmp/training')
zip_ref.close()
local_zip = '/tmp/validation-horse-or-human.zip'
zip_ref = zipfile.ZipFile(local_zip, 'r')
zip_ref.extractall('/tmp/validation')
zip_ref.close()
# These directories were extracted above; define the paths before first use
train_dir = '/tmp/training'
validation_dir = '/tmp/validation'
train_horses_dir = os.path.join(train_dir, 'horses') # Directory with our training horse pictures
train_humans_dir = os.path.join(train_dir, 'humans') # Directory with our training human pictures
validation_horses_dir = os.path.join(validation_dir, 'horses') # Directory with our validation horse pictures
validation_humans_dir = os.path.join(validation_dir, 'humans') # Directory with our validation human pictures
train_horses_fnames = os.listdir(train_horses_dir)
train_humans_fnames = os.listdir(train_humans_dir)
validation_horses_fnames = os.listdir(validation_horses_dir)
validation_humans_fnames = os.listdir(validation_humans_dir)
print(len(train_horses_fnames))
print(len(train_humans_fnames))
print(len(validation_horses_fnames))
print(len(validation_humans_fnames))
# Expected Output:
# 500
# 527
# 128
# 128
# Define our example directories and files
train_dir = '/tmp/training'
validation_dir = '/tmp/validation'
# Add our data-augmentation parameters to ImageDataGenerator
train_datagen = ImageDataGenerator(rescale = 1./255.,
rotation_range = 40,
width_shift_range = 0.2,
height_shift_range = 0.2,
shear_range = 0.2,
zoom_range = 0.2,
horizontal_flip = True)
# Note that the validation data should not be augmented!
test_datagen = ImageDataGenerator( rescale = 1.0/255. )
# Flow training images in batches of 20 using train_datagen generator
train_generator = train_datagen.flow_from_directory(train_dir,
batch_size = 20,
class_mode = 'binary',
target_size = (150, 150))
# Flow validation images in batches of 20 using test_datagen generator
validation_generator = test_datagen.flow_from_directory( validation_dir,
batch_size = 20,
class_mode = 'binary',
target_size = (150, 150))
# Expected Output:
# Found 1027 images belonging to 2 classes.
# Found 256 images belonging to 2 classes.
# Run this and see how many epochs it should take before the callback
# fires, and stops training at 99.9% accuracy
# (It should take less than 100 epochs)
callbacks = myCallback()
# Note: `fit_generator` is deprecated in recent TensorFlow releases;
# from TF 2.1 onwards, `model.fit` accepts generators directly.
history = model.fit_generator(
train_generator,
validation_data = validation_generator,
steps_per_epoch = 100,
epochs = 100,
validation_steps = 50,
verbose = 2,
callbacks=[callbacks])
import matplotlib.pyplot as plt
# note: these history keys match the metric name used in model.compile
# (e.g. 'acc' vs 'accuracy')
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'r', label='Training accuracy')
plt.plot(epochs, val_acc, 'b', label='Validation accuracy')
plt.title('Training and validation accuracy')
plt.legend(loc=0)
plt.show()
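# A small addition (not in the original notebook): `loss` and `val_loss` are
# computed above but never plotted; a companion figure for the loss curves:
plt.figure()
plt.plot(epochs, loss, 'r', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend(loc=0)
plt.show()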
###Output
_____no_output_____ |
projects/modelingsteps/ModelingSteps_1through2_DL.ipynb | ###Markdown
Modeling Steps 1 - 2**By Neuromatch Academy**__Content creators:__ Marius 't Hart, Megan Peters, Paul Schrater, Gunnar Blohm__Content reviewers:__ Eric DeWitt, Tara van Viegen, Marius Pachitariu__Production editors:__ Ella Batty, Spiros Chavlis **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs** **Note that this is the same as [NMA-CN W1D2 Tutorial 1](https://github.com/NeuromatchAcademy/course-content/blob/master/tutorials/W1D2_ModelingPractice/W1D2_Tutorial1.ipynb) - we provide it here as well for ease of access.** --- ObjectivesWe deconstruct the modeling process and break it down into 10 easy steps. Following the thought process of these steps will help you design and complete a Deep Learning (DL) project.We assume that you have a general idea of a project in mind, i.e., a preliminary question, goal, and/or phenomenon you would like to investigate. These 10 steps were originally developed for computational neuroscience models; but they really apply to any research project. We will now work through the 10 steps of modeling ([Blohm et al., 2019](https://doi.org/10.1523/ENEURO.0352-19.2019)).We provide 3 example projects:* a neuro theory model (if you're comp neuro inclined) - this is also our roleplay example! See the corresponding notebook [here](https://github.com/NeuromatchAcademy/course-content-dl/blob/main/projects/modelingsteps/TrainIllusionModelingProjectDL.ipynb)* a brain decoding model (simple logistic regression; if you're data science inclined)! See the corresponding notebook [here](https://github.com/NeuromatchAcademy/course-content-dl/blob/main/projects/modelingsteps/TrainIllusionDataProjectDL.ipynb)* a movement classification model (Convolutional Neural Network; if you're DL inclined)! See the corresponding notebook [here](https://github.com/NeuromatchAcademy/course-content-dl/blob/main/projects/modelingsteps/Example_Deep_Learning_Project.ipynb)
###Code
# @title Video 0: 10 steps overview
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1uw411R7RR", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="9gw2lmnHY54", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# @title Tutorial slides
# @markdown These are the slides for the *DL projects intro*
from IPython.display import IFrame
IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/wm2q3/?direct%26mode=render%26action=download%26mode=render", width=854, height=480)
###Output
_____no_output_____
###Markdown
--- Setup
###Code
# Imports
import numpy as np
import matplotlib.pyplot as plt
# for random distributions:
from scipy.stats import norm, poisson
# for logistic regression:
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
# @title Plotting Functions
def rasterplot(spikes,movement,trial):
[movements, trials, neurons, timepoints] = np.shape(spikes)
trial_spikes = spikes[movement,trial,:,:]
trial_events = [((trial_spikes[x,:] > 0).nonzero()[0]-150)/100 for x in range(neurons)]
plt.figure()
dt=1/100
plt.eventplot(trial_events, linewidths=1);
plt.title('movement: %d - trial: %d'%(movement, trial))
plt.ylabel('neuron')
plt.xlabel('time [s]')
def plotCrossValAccuracies(accuracies):
f, ax = plt.subplots(figsize=(8, 3))
ax.boxplot(accuracies, vert=False, widths=.7)
ax.scatter(accuracies, np.ones(8))
ax.set(
xlabel="Accuracy",
yticks=[],
title=f"Average test accuracy: {accuracies.mean():.2%}"
)
ax.spines["left"].set_visible(False)
# @title Generate Data
# @markdown `generateSpikeTrains(seed=37)`
# @markdown `subsetPerception(spikes, seed=0)`
def generateSpikeTrains(seed=37):
gain = 2
neurons = 50
movements = [0, 1, 2]
repetitions = 800
np.random.seed(seed)
# set up the basic parameters:
dt = 1/100
start, stop = -1.5, 1.5
t = np.arange(start, stop + dt, dt) # a time interval
Velocity_sigma = 0.5 # std dev of the velocity profile
Velocity_Profile = norm.pdf(t, 0, Velocity_sigma)/norm.pdf(0, 0, Velocity_sigma) # The Gaussian velocity profile, normalized to a peak of 1
# set up the neuron properties:
Gains = np.random.rand(neurons) * gain # random sensitivity between 0 and `gain`
FRs = (np.random.rand(neurons) * 60 ) - 10 # random base firing rate between -10 and 50
# output matrix will have this shape:
target_shape = [len(movements), repetitions, neurons, len(Velocity_Profile)]
# build matrix for spikes, first, they depend on the velocity profile:
Spikes = np.repeat(Velocity_Profile.reshape([1, 1, 1, len(Velocity_Profile)]),
len(movements)*repetitions*neurons, axis=2).reshape(target_shape)
# multiplied by gains:
S_gains = np.repeat(np.repeat(Gains.reshape([1, 1, neurons]),
len(movements)*repetitions, axis=1).reshape(target_shape[:3]),
len(Velocity_Profile)).reshape(target_shape)
Spikes = Spikes * S_gains
# and multiplied by the movement:
S_moves = np.repeat( np.array(movements).reshape([len(movements), 1, 1, 1]),
repetitions*neurons*len(Velocity_Profile),
axis=3 ).reshape(target_shape)
Spikes = Spikes * S_moves
# on top of a baseline firing rate:
S_FR = np.repeat(np.repeat(FRs.reshape([1, 1, neurons]),
len(movements)*repetitions, axis=1).reshape(target_shape[:3]),
len(Velocity_Profile)).reshape(target_shape)
Spikes = Spikes + S_FR
# can not run the poisson random number generator on input lower than 0:
Spikes = np.where(Spikes < 0, 0, Spikes)
# so far, these were expected firing rates per second, correct for dt:
Spikes = poisson.rvs(Spikes * dt)
return Spikes
def subsetPerception(spikes, seed=0):
movements = [0, 1, 2]
split = 400
subset = 40
hwin = 3
[num_movements, repetitions, neurons, timepoints] = np.shape(spikes)
decision = np.zeros([num_movements, repetitions])
# ground truth for logistic regression:
y_train = np.repeat([0, 1, 1], split)
y_test = np.repeat([0, 1, 1], repetitions - split)
m_train = np.repeat(movements, split)
m_test = np.repeat(movements, split)
# reproduce the time points:
dt = 1 / 100
start, stop = -1.5, 1.5
t = np.arange(start, stop+dt, dt)
w_idx = list((abs(t) < (hwin*dt)).nonzero()[0])
w_0 = min(w_idx)
w_1 = max(w_idx) + 1 # python...0
# get the total spike counts from stationary and movement trials:
spikes_stat = np.sum(spikes[0, :, :, :], axis=2)
spikes_move = np.sum(spikes[1:, :, :, :], axis=3)
train_spikes_stat = spikes_stat[:split, :]
train_spikes_move = spikes_move[:, :split, :].reshape([-1 ,neurons])
test_spikes_stat = spikes_stat[split:, :]
test_spikes_move = spikes_move[:, split:, :].reshape([-1, neurons])
# data to use to predict y:
x_train = np.concatenate((train_spikes_stat, train_spikes_move))
x_test = np.concatenate(( test_spikes_stat, test_spikes_move))
  # this line creates a logistic regression model object and immediately fits it:
population_model = LogisticRegression(solver='liblinear',
random_state=seed).fit(x_train, y_train)
# solver, one of: 'liblinear', 'newton-cg', 'lbfgs', 'sag', and 'saga'
# some of those require certain other options
# print(population_model.coef_) # slope
# print(population_model.intercept_) # intercept
ground_truth = np.array(population_model.predict(x_test))
ground_truth = ground_truth.reshape([3, -1])
output = {}
output['perception'] = ground_truth
output['spikes'] = spikes[:, split:, :subset, :]
return output
def getData():
spikes = generateSpikeTrains()
dataset = subsetPerception(spikes=spikes)
return dataset
dataset = getData()
perception = dataset['perception']
spikes = dataset['spikes']
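# A hypothetical sketch (not in the original notebook): `cross_val_score` is
# imported above and `plotCrossValAccuracies` is defined, but neither is used
# in this section. One way to combine them, to check how well total spike
# counts predict the motion judgements (cv=8 matches the np.ones(8) hard-coded
# in the plotting helper):
X = np.sum(spikes, axis=3).reshape([-1, np.shape(spikes)[2]])  # total spike count per neuron, trials stacked
y = perception.reshape([-1])  # 0 = no motion judged, 1 = motion judged
accuracies = cross_val_score(LogisticRegression(solver='liblinear'), X, y, cv=8)
plotCrossValAccuracies(accuracies)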
###Output
_____no_output_____
###Markdown
DemosWe will demo the modeling process to you based on the train illusion. The introductory video will explain the phenomenon to you. Then we will do roleplay to showcase some common pitfalls to you based on a computational modeling project around the train illusion. In addition to the computational model, we will also provide a data neuroscience project example to you so you can appreciate similarities and differences. Enjoy! DisclaimerThe pitfalls roleplay videos were developed for a computational neuroscience modeling project. But all steps and pitfalls also apply to Deep Learning projects. There is a DL joke throughout these videos; that does NOT mean we do not like or appreciate DL (on the contrary, otherwise we would not be teaching it here). But it should serve as a warning that DL is not a magic answer to all questions... 😉
###Code
# @title Video 1: Introduction to tutorial
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1Mf4y1b7xS", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="GyGNs1fLIYQ", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
---- Step 1: Finding a phenomenon and a question to ask about it
###Code
# @title Video 2: Asking a question
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1VK4y1M7dc", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="4Gl8X_y_uoA", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# @title Example projects step 1
from ipywidgets import widgets
from IPython.display import Markdown
markdown1 = '''
## Step 1
<br>
<font size='3pt'>
The train illusion occurs when sitting on a train and viewing another train outside the window. Suddenly, the other train *seems* to move, i.e. you experience visual motion of the other train relative to your train. But which train is actually moving?
Often people have the wrong percept. In particular, they think their own train might be moving when it's the other train that moves; or vice versa. The illusion is usually resolved once you gain vision of the surroundings that lets you disambiguate the relative motion; or if you experience strong vibrations indicating that it is indeed your own train that is in motion.
We asked the following (arbitrary) question for our demo project: "How do noisy vestibular estimates of motion lead to illusory percepts of self motion?"
</font>
'''
markdown2 = '''
## Step 1
<br>
<font size='3pt'>
The train illusion occurs when sitting on a train and viewing another train outside the window. Suddenly, the other train *seems* to move, i.e. you experience visual motion of the other train relative to your train. But which train is actually moving?
Often people mix this up. In particular, they think their own train might be moving when it's the other train that moves; or vice versa. The illusion is usually resolved once you gain vision of the surroundings that lets you disambiguate the relative motion; or if you experience strong vibrations indicating that it is indeed your own train that is in motion.
We assume that we have built the train illusion model (see the other example project colab). That model predicts that accumulated sensory evidence from vestibular signals determines the decision of whether self-motion is experienced or not. We now have vestibular neuron data (simulated in our case, but let's pretend) and would like to see if that prediction holds true.
The data contains *N* neurons and *M* trials for each of 3 motion conditions: no self-motion, slowly accelerating self-motion and faster accelerating self-motion. In our data,
*N* = 40 and *M* = 400.
**So we can ask the following question**: "Does accumulated vestibular neuron activity correlate with self-motion judgements?"
</font>
'''
markdown3 = '''
## Step 1
<br>
<font size='3pt'>
There are many different questions we could ask with the MoVi dataset. We will start with a simple question: "Can we classify movements from skeletal motion data, and if so, which body parts are the most informative ones?"
Our goal is to perform a pilot study to see if this is possible in principle. We will therefore use "ground truth" skeletal motion data that has been computed using an inference algorithm (see MoVi paper). If this works out, then as a next step we might want to use the raw sensor data or even videos...
The ultimate goal could for example be to figure out which body parts to record movements from (e.g. is just a wristband enough?) to classify movement.
</font>
'''
out2 = widgets.Output()
with out2:
display(Markdown(markdown2))
out1 = widgets.Output()
with out1:
display(Markdown(markdown1))
out3 = widgets.Output()
with out3:
display(Markdown(markdown3))
out = widgets.Tab([out1, out2, out3])
out.set_title(0, 'Computational Model')
out.set_title(1, 'Data Analysis')
out.set_title(2, 'Deep Learning')
display(out)
###Output
_____no_output_____
###Markdown
Asking your own question You should already have a project idea from your brainstorming yesterday. **Write down the phenomenon, question and goal(s) if you have them.** As a reminder, here is what you should discuss and write down: * What exact aspect of data needs modeling? * Answer this question clearly and precisely! Otherwise you will get lost (almost guaranteed) * Write everything down! * Also identify aspects of data that you do not want to address (yet) * Define an evaluation method! * How will you know your modeling is good? * E.g., comparison to specific data (quantitative method of comparison?) * For computational models: think of an experiment that could test your model * You essentially want your model to interface with this experiment, i.e. you want to simulate this experiment. You can find interesting questions by looking for phenomena that differ from your expectations. In *what* way does it differ? *How* could that be explained (starting to think about mechanistic questions and structural hypotheses)? *Why* could it be the way it is? What experiment could you design to investigate this phenomenon? What kind of data would you need? **Make sure to avoid the pitfalls!**Click here for a recap on pitfalls: * Question is too general: Remember, science advances one small step at a time. Get the small step right… * The precise aspect of the phenomenon you want to model is unclear: You will fail to ask a meaningful question. * You have already chosen a toolkit: This will prevent you from thinking deeply about the best way to answer your scientific question. * You don’t have a clear goal: What do you want to get out of modeling? * You don’t have a potential experiment in mind: An experiment helps concretize your objectives and think through the logic behind your goal. **Note** The hardest part is Step 1. Once that is properly set up, all other steps should be easier. **BUT**: often you think that Step 1 is done, only to figure out in later steps (anywhere, really) that you were not as clear on your question and goal as you thought. Revisiting Step 1 is a frequent necessity. Don't feel bad about it. You can revisit Step 1 later; for now, let's move on to the next step. ---- Step 2: Understanding the state of the art & background Here you will do a literature review (**to be done AFTER this tutorial!**).
###Code
# @title Video 3: Literature Review & Background Knowledge
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1by4y1M7TZ", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="d8zriLaMc14", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# @title Example projects step 2
from ipywidgets import widgets
from IPython.display import Markdown
import numpy as np
markdown1 = '''
## Step 2
<br>
<font size='3pt'>
You have learned all about the vestibular system in the Intro video. This is also where you would do a literature search to learn more about what's known about self-motion perception and vestibular signals. You would also want to examine any attempts to model self-motion, perceptual decision making and vestibular processing.</font>
'''
markdown21 = '''
## Step 2
<br>
<font size='3pt'>
While it seems a well-known fact that vestibular signals are noisy, we should check if we can also find this in the literature.
Let's also see what's in our data: there should be a 4d array called `spikes` that holds spike counts (positive integers), and a 2d array called `perception` with self-motion judgements (0=no motion or 1=motion). Let's see what this data looks like:
</font><br>
'''
markdown22 = '''
<br>
<font size='3pt'>
In the `spikes` array, we see our 3 acceleration conditions (first dimension), with 400 trials each (second dimension) and simultaneous recordings from 40 neurons (third dimension), across 3 seconds in 10 ms bins (fourth dimension). The first two dimensions are also there in the `perception` array.
Perfect perception would have looked like [0, 1, 1]. The average judgements are far from correct (lots of self-motion illusions) but they do make some sense: it's closer to 0 in the no-motion condition and closer to 1 in both of the real-motion conditions.
The idea of our project is that the vestibular signals are noisy so that they might be mis-interpreted by the brain. Let's see if we can reproduce the stimuli from the data:
</font>
<br>
'''
markdown23 = '''
<br>
<font size='3pt'>
Blue is the no-motion condition, and produces flat average spike counts across the 3 s time interval. The orange and green lines do show a bell-shaped curve that corresponds to the acceleration profile. But there also seems to be considerable noise: exactly what we need. Let's see what the spike trains for a single trial look like:
</font>
<br>
'''
markdown24 = '''
<br>
<font size='3pt'>
You can change the trial number in the bit of code above to compare what the rasterplots look like in different trials. You'll notice that they all look kind of the same: the 3 conditions are very hard (impossible?) to distinguish by eye-balling.
Now that we have seen the data, let's see if we can extract self-motion judgements from the spike counts.
</font>
<br>
'''
markdown3 = '''
## Step 2
<br>
<font size='3pt'>
Most importantly, our literature review needs to address the following:
* what modeling approaches make it possible to classify time series data?
* how is human motion captured?
* what exactly is in the MoVi dataset?
* what is known regarding classification of human movement based on different measurements?
What we learn from the literature review is too long to write out here... But we would like to point out that human motion classification has been done; we're not proposing a very novel project here. But that's ok for an NMA project!
</font>
<br>
'''
out2 = widgets.Output()
with out2:
display(Markdown(markdown21))
print(f'The shape of `spikes` is: {np.shape(spikes)}')
print(f'The shape of `perception` is: {np.shape(perception)}')
print(f'The mean of `perception` is: {np.mean(perception, axis=1)}')
display(Markdown(markdown22))
for move_no in range(3):
plt.plot(np.arange(-1.5, 1.5 + (1/100),
(1/100)),
np.mean(np.mean(spikes[move_no, :, :, :],
axis=0),
axis=0),
label=['no motion', '$1 m/s^2$', '$2 m/s^2$'][move_no])
plt.xlabel('time [s]');
plt.ylabel('averaged spike counts');
plt.legend()
plt.show()
display(Markdown(markdown23))
for move in range(3):
rasterplot(spikes = spikes, movement = move, trial = 0)
plt.show()
display(Markdown(markdown24))
out1 = widgets.Output()
with out1:
display(Markdown(markdown1))
out3 = widgets.Output()
with out3:
display(Markdown(markdown3))
out = widgets.Tab([out1, out2, out3])
out.set_title(0, 'Computational Model')
out.set_title(1, 'Data Analysis')
out.set_title(2, 'Deep Learning')
display(out)
###Output
_____no_output_____
###Markdown
Modeling Steps 1 - 2**By Neuromatch Academy**__Content creators:__ Marius 't Hart, Megan Peters, Paul Schrater, Gunnar Blohm__Content reviewers:__ Eric DeWitt, Tara van Viegen, Marius Pachitariu__Production editors:__ Ella Batty, Spiros Chavlis **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs** **Note that this is the same as [NMA-CN W1D2 Tutorial 1](https://github.com/NeuromatchAcademy/course-content/blob/master/tutorials/W1D2_ModelingPractice/W1D2_Tutorial1.ipynb) - we provide it here as well for ease of access.** --- ObjectivesWe deconstruct the modeling process and break it down into 10 easy steps. Following the thought process of these steps will help you design and complete a Deep Learning (DL) project.We assume that you have a general idea of a project in mind, i.e., a preliminary question, goal, and/or phenomenon you would like to investigate. These 10 steps were originally developed for computational neuroscience models; but they really apply to any research project. We will now work through the 10 steps of modeling ([Blohm et al., 2019](https://doi.org/10.1523/ENEURO.0352-19.2019)).We provide 3 example projects:* a computational neuroscience model (if you're comp neuro inclined) - this is also our roleplay example! See the corresponding notebook [here](https://github.com/NeuromatchAcademy/course-content-dl/blob/main/projects/modelingsteps/TrainIllusionModelingProjectDL.ipynb)* a brain decoding model (simple logistic regression; if you're data science inclined). See the corresponding notebook [here](https://github.com/NeuromatchAcademy/course-content-dl/blob/main/projects/modelingsteps/TrainIllusionDataProjectDL.ipynb)* a movement classification model (Convolutional Neural Network; if you're DL inclined). See the corresponding notebook [here](https://github.com/NeuromatchAcademy/course-content-dl/blob/main/projects/modelingsteps/Example_Deep_Learning_Project.ipynb)
###Code
# @title Video 0: 10 steps overview
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="9gw2lmnHY54", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# @title Tutorial slides
# @markdown These are the slides for the *DL projects intro*
from IPython.display import IFrame
IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/wm2q3/?direct%26mode=render%26action=download%26mode=render", width=854, height=480)
###Output
_____no_output_____
###Markdown
--- Setup
###Code
# Imports
import numpy as np
import matplotlib.pyplot as plt
# for random distributions:
from scipy.stats import norm, poisson
# for logistic regression:
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
# @title Plotting Functions
def rasterplot(spikes,movement,trial):
[movements, trials, neurons, timepoints] = np.shape(spikes)
trial_spikes = spikes[movement,trial,:,:]
trial_events = [((trial_spikes[x,:] > 0).nonzero()[0]-150)/100 for x in range(neurons)]
plt.figure()
dt=1/100
plt.eventplot(trial_events, linewidths=1);
plt.title('movement: %d - trial: %d'%(movement, trial))
plt.ylabel('neuron')
plt.xlabel('time [s]')
def plotCrossValAccuracies(accuracies):
f, ax = plt.subplots(figsize=(8, 3))
ax.boxplot(accuracies, vert=False, widths=.7)
ax.scatter(accuracies, np.ones(8))
ax.set(
xlabel="Accuracy",
yticks=[],
title=f"Average test accuracy: {accuracies.mean():.2%}"
)
ax.spines["left"].set_visible(False)
# @title Generate Data
# @markdown `generateSpikeTrains(seed=37)`
# @markdown `subsetPerception(spikes, seed=0)`
def generateSpikeTrains(seed=37):
gain = 2
neurons = 50
movements = [0, 1, 2]
repetitions = 800
np.random.seed(seed)
# set up the basic parameters:
dt = 1/100
start, stop = -1.5, 1.5
t = np.arange(start, stop + dt, dt) # a time interval
Velocity_sigma = 0.5 # std dev of the velocity profile
Velocity_Profile = norm.pdf(t, 0, Velocity_sigma)/norm.pdf(0, 0, Velocity_sigma) # The Gaussian velocity profile, normalized to a peak of 1
# set up the neuron properties:
Gains = np.random.rand(neurons) * gain # random sensitivity between 0 and `gain`
FRs = (np.random.rand(neurons) * 60 ) - 10 # random base firing rate between -10 and 50
# output matrix will have this shape:
target_shape = [len(movements), repetitions, neurons, len(Velocity_Profile)]
# build matrix for spikes, first, they depend on the velocity profile:
Spikes = np.repeat(Velocity_Profile.reshape([1, 1, 1, len(Velocity_Profile)]),
len(movements)*repetitions*neurons, axis=2).reshape(target_shape)
# multiplied by gains:
S_gains = np.repeat(np.repeat(Gains.reshape([1, 1, neurons]),
len(movements)*repetitions, axis=1).reshape(target_shape[:3]),
len(Velocity_Profile)).reshape(target_shape)
Spikes = Spikes * S_gains
# and multiplied by the movement:
S_moves = np.repeat( np.array(movements).reshape([len(movements), 1, 1, 1]),
repetitions*neurons*len(Velocity_Profile),
axis=3 ).reshape(target_shape)
Spikes = Spikes * S_moves
# on top of a baseline firing rate:
S_FR = np.repeat(np.repeat(FRs.reshape([1, 1, neurons]),
len(movements)*repetitions, axis=1).reshape(target_shape[:3]),
len(Velocity_Profile)).reshape(target_shape)
Spikes = Spikes + S_FR
# can not run the poisson random number generator on input lower than 0:
Spikes = np.where(Spikes < 0, 0, Spikes)
# so far, these were expected firing rates per second, correct for dt:
Spikes = poisson.rvs(Spikes * dt)
return Spikes
def subsetPerception(spikes, seed=0):
movements = [0, 1, 2]
split = 400
subset = 40
hwin = 3
[num_movements, repetitions, neurons, timepoints] = np.shape(spikes)
decision = np.zeros([num_movements, repetitions])
# ground truth for logistic regression:
y_train = np.repeat([0, 1, 1], split)
y_test = np.repeat([0, 1, 1], repetitions - split)
m_train = np.repeat(movements, split)
m_test = np.repeat(movements, split)
# reproduce the time points:
dt = 1 / 100
start, stop = -1.5, 1.5
t = np.arange(start, stop+dt, dt)
w_idx = list((abs(t) < (hwin*dt)).nonzero()[0])
w_0 = min(w_idx)
w_1 = max(w_idx) + 1 # python...0
# get the total spike counts from stationary and movement trials:
spikes_stat = np.sum(spikes[0, :, :, :], axis=2)
spikes_move = np.sum(spikes[1:, :, :, :], axis=3)
train_spikes_stat = spikes_stat[:split, :]
train_spikes_move = spikes_move[:, :split, :].reshape([-1 ,neurons])
test_spikes_stat = spikes_stat[split:, :]
test_spikes_move = spikes_move[:, split:, :].reshape([-1, neurons])
# data to use to predict y:
x_train = np.concatenate((train_spikes_stat, train_spikes_move))
x_test = np.concatenate(( test_spikes_stat, test_spikes_move))
  # this line creates a logistic regression model object and immediately fits it:
population_model = LogisticRegression(solver='liblinear',
random_state=seed).fit(x_train, y_train)
# solver, one of: 'liblinear', 'newton-cg', 'lbfgs', 'sag', and 'saga'
# some of those require certain other options
# print(population_model.coef_) # slope
# print(population_model.intercept_) # intercept
ground_truth = np.array(population_model.predict(x_test))
ground_truth = ground_truth.reshape([3, -1])
output = {}
output['perception'] = ground_truth
output['spikes'] = spikes[:, split:, :subset, :]
return output
def getData():
spikes = generateSpikeTrains()
dataset = subsetPerception(spikes=spikes)
return dataset
dataset = getData()
perception = dataset['perception']
spikes = dataset['spikes']
###Output
_____no_output_____
###Markdown
DemosWe will demo the modeling process to you based on the train illusion. The introductory video will explain the phenomenon to you. Then we will do roleplay to showcase some common pitfalls to you based on a computational modeling project around the train illusion. In addition to the computational model, we will also provide a data neuroscience project example to you so you can appreciate similarities and differences. Enjoy! DisclaimerThe pitfalls roleplay videos were developed for a computational neuroscience modeling project. But all steps and pitfalls also apply to Deep Learning projects. There is a DL joke throughout these videos; that does NOT mean we do not like or appreciate DL (on the contrary, otherwise we would not be teaching it here). But it should serve as a warning that DL is not a magic answer to all questions... 😉
###Code
# @title Video 1: Introduction to tutorial
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1Mf4y1b7xS", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="GyGNs1fLIYQ", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
---- Step 1: Finding a phenomenon and a question to ask about it
###Code
# @title Video 2: Asking a question
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1VK4y1M7dc", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="4Gl8X_y_uoA", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# @title Example projects step 1
from ipywidgets import widgets
from IPython.display import Markdown
markdown1 = '''
## Step 1
<br>
<font size='3pt'>
The train illusion occurs when sitting on a train and viewing another train outside the window. Suddenly, the other train *seems* to move, i.e. you experience visual motion of the other train relative to your train. But which train is actually moving?
Often people have the wrong percept. In particular, they think their own train might be moving when it's the other train that moves; or vice versa. The illusion is usually resolved once you gain vision of the surroundings that lets you disambiguate the relative motion; or if you experience strong vibrations indicating that it is indeed your own train that is in motion.
We asked the following (arbitrary) question for our demo project: "How do noisy vestibular estimates of motion lead to illusory percepts of self motion?"
</font>
'''
markdown2 = '''
## Step 1
<br>
<font size='3pt'>
The train illusion occurs when sitting on a train and viewing another train outside the window. Suddenly, the other train *seems* to move, i.e. you experience visual motion of the other train relative to your train. But which train is actually moving?
Often people mix this up. In particular, they think their own train might be moving when it's the other train that moves; or vice versa. The illusion is usually resolved once you gain vision of the surroundings that lets you disambiguate the relative motion; or if you experience strong vibrations indicating that it is indeed your own train that is in motion.
We assume that we have built the train illusion model (see the other example project colab). That model predicts that accumulated sensory evidence from vestibular signals determines the decision of whether self-motion is experienced or not. We now have vestibular neuron data (simulated in our case, but let's pretend) and would like to see if that prediction holds true.
The data contains *N* neurons and *M* trials for each of 3 motion conditions: no self-motion, slowly accelerating self-motion and faster accelerating self-motion. In our data,
*N* = 40 and *M* = 400.
**So we can ask the following question**: "Does accumulated vestibular neuron activity correlate with self-motion judgements?"
</font>
'''
out2 = widgets.Output()
with out2:
display(Markdown(markdown2))
out1 = widgets.Output()
with out1:
display(Markdown(markdown1))
out = widgets.Tab([out1, out2])
out.set_title(0, 'Computational Model')
out.set_title(1, 'Data Analysis')
display(out)
###Output
_____no_output_____
###Markdown
Asking your own question You should already have a project idea from your brainstorming yesterday. **Write down the phenomenon, question and goal(s) if you have them.** As a reminder, here is what you should discuss and write down: * What exact aspect of data needs modeling? * Answer this question clearly and precisely! Otherwise you will get lost (almost guaranteed) * Write everything down! * Also identify aspects of data that you do not want to address (yet) * Define an evaluation method! * How will you know your modeling is good? * E.g., comparison to specific data (quantitative method of comparison?) * For computational models: think of an experiment that could test your model * You essentially want your model to interface with this experiment, i.e. you want to simulate this experiment. You can find interesting questions by looking for phenomena that differ from your expectations. In *what* way does it differ? *How* could that be explained (starting to think about mechanistic questions and structural hypotheses)? *Why* could it be the way it is? What experiment could you design to investigate this phenomenon? What kind of data would you need? **Make sure to avoid the pitfalls!**Click here for a recap on pitfalls: * Question is too general: Remember, science advances one small step at a time. Get the small step right… * The precise aspect of the phenomenon you want to model is unclear: You will fail to ask a meaningful question. * You have already chosen a toolkit: This will prevent you from thinking deeply about the best way to answer your scientific question. * You don’t have a clear goal: What do you want to get out of modeling? * You don’t have a potential experiment in mind: An experiment helps concretize your objectives and think through the logic behind your goal. **Note** The hardest part is Step 1. Once that is properly set up, all other steps should be easier. **BUT**: often you think that Step 1 is done, only to figure out in later steps (anywhere, really) that you were not as clear on your question and goal as you thought. Revisiting Step 1 is a frequent necessity. Don't feel bad about it. You can revisit Step 1 later; for now, let's move on to the next step. ---- Step 2: Understanding the state of the art & background Here you will do a literature review (**to be done AFTER this tutorial!**).
###Code
# @title Video 3: Literature Review & Background Knowledge
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1by4y1M7TZ", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="d8zriLaMc14", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# @title Example projects step 2
from ipywidgets import widgets
from IPython.display import Markdown
import numpy as np
markdown1 = '''
## Step 2
<br>
<font size='3pt'>
You have learned all about the vestibular system in the Intro video. This is also where you would do a literature search to learn more about what's known about self-motion perception and vestibular signals. You would also want to examine any attempts to model self-motion, perceptual decision making and vestibular processing.</font>
'''
markdown21 = '''
## Step 2
<br>
<font size='3pt'>
While it seems to be a well-known fact that vestibular signals are noisy, we should check whether we can also find this in the literature.
Let's also see what's in our data: there should be a 4d array called `spikes` that holds spike counts (non-negative integers) and a 2d array called `perception` with self-motion judgements (0 = no motion, 1 = motion). Let's see what this data looks like:
</font><br>
'''
markdown22 = '''
<br>
<font size='3pt'>
In the `spikes` array, we see our 3 acceleration conditions (first dimension), with 400 trials each (second dimension) and simultaneous recordings from 40 neurons (third dimension), across 3 seconds in 10 ms bins (fourth dimension). The first two dimensions are also there in the `perception` array.
Perfect perception would have looked like [0, 1, 1]. The average judgements are far from correct (lots of self-motion illusions) but they do make some sense: it's closer to 0 in the no-motion condition and closer to 1 in both of the real-motion conditions.
The idea of our project is that the vestibular signals are noisy, so they might be misinterpreted by the brain. Let's see if we can reproduce the stimuli from the data:
</font>
<br>
'''
markdown23 = '''
<br>
<font size='3pt'>
Blue is the no-motion condition, which produces flat average spike counts across the 3 s time interval. The orange and green lines show a bell-shaped curve that corresponds to the acceleration profile. But there also seems to be considerable noise: exactly what we need. Let's see what the spike trains for a single trial look like:
</font>
<br>
'''
markdown24 = '''
<br>
<font size='3pt'>
You can change the trial number in the bit of code above to compare what the rasterplots look like in different trials. You'll notice that they all look kind of the same: the 3 conditions are very hard (impossible?) to distinguish by eye-balling.
Now that we have seen the data, let's see if we can extract self-motion judgements from the spike counts.
</font>
<br>
'''
display(Markdown(r""))
out2 = widgets.Output()
with out2:
display(Markdown(markdown21))
print(f'The shape of `spikes` is: {np.shape(spikes)}')
print(f'The shape of `perception` is: {np.shape(perception)}')
print(f'The mean of `perception` is: {np.mean(perception, axis=1)}')
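# (Added illustration, not in the original cell) Indexing follows
# spikes[movement, trial, neuron, time_bin]; e.g., spikes[2, 0, 7, :] would be
# neuron 7's binned spike counts on the first trial of the faster-acceleration
# condition.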
display(Markdown(markdown22))
for move_no in range(3):
plt.plot(np.arange(-1.5, 1.5 + (1/100),
(1/100)),
np.mean(np.mean(spikes[move_no, :, :, :],
axis=0),
axis=0),
label=['no motion', '$1 m/s^2$', '$2 m/s^2$'][move_no])
plt.xlabel('time [s]');
plt.ylabel('averaged spike counts');
plt.legend()
plt.show()
display(Markdown(markdown23))
for move in range(3):
rasterplot(spikes = spikes, movement = move, trial = 0)
plt.show()
display(Markdown(markdown24))
out1 = widgets.Output()
with out1:
display(Markdown(markdown1))
out = widgets.Tab([out1, out2])
out.set_title(0, 'Computational Model')
out.set_title(1, 'Data Analysis')
display(out)
###Output
_____no_output_____
###Markdown
Modeling Steps 1 - 2**By Neuromatch Academy**__Content creators:__ Marius 't Hart, Megan Peters, Paul Schrater, Gunnar Blohm__Content reviewers:__ Eric DeWitt, Tara van Viegen, Marius Pachitariu__Production editors:__ Ella Batty, Spiros Chavlis **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs** **Note that this is the same as [NMA-CN W1D2 Tutorial 1](https://github.com/NeuromatchAcademy/course-content/blob/master/tutorials/W1D2_ModelingPractice/W1D2_Tutorial1.ipynb) - we provide it here as well for ease of access.** --- ObjectivesWe deconstruct the modeling process and break it down into 10 easy steps. Following the thought process of these steps will help you design and complete a Deep Learning (DL) project.We assume that you have a general idea of a project in mind, i.e., a preliminary question, goal, and/or phenomenon you would like to investigate. These 10 steps were originally developed for computational neuroscience models; but they really apply to any research project. We will now work through the 10 steps of modeling ([Blohm et al., 2019](https://doi.org/10.1523/ENEURO.0352-19.2019)).We provide 3 example projects:* a neuro theory model (if you're comp neuro inclined) - this is also our roleplay example! See the corresponding notebook [here](https://github.com/NeuromatchAcademy/course-content-dl/blob/main/projects/modelingsteps/TrainIllusionModelingProjectDL.ipynb)* a brain decoding model (simple logistic regression; if your data science inclined)! See the corresponding notebook [here](https://github.com/NeuromatchAcademy/course-content-dl/blob/main/projects/modelingsteps/TrainIllusionDataProjectDL.ipynb)* a movement classification model (Convolutional Neural Network; if you're DL inclined)! See the corresponding notebook [here](https://github.com/NeuromatchAcademy/course-content-dl/blob/main/projects/modelingsteps/Example_Deep_Learning_Project.ipynb)
###Code
# @title Video 0: 10 steps overview
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1uw411R7RR", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="9gw2lmnHY54", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# @title Tutorial slides
# @markdown These are the slides for the *DL projects intro*
from IPython.display import IFrame
IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/wm2q3/?direct%26mode=render%26action=download%26mode=render", width=854, height=480)
###Output
_____no_output_____
###Markdown
--- Setup
###Code
# Imports
import numpy as np
import matplotlib.pyplot as plt
# for random distributions:
from scipy.stats import norm, poisson
# for logistic regression:
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
# @title Plotting Functions
def rasterplot(spikes,movement,trial):
[movements, trials, neurons, timepoints] = np.shape(spikes)
trial_spikes = spikes[movement,trial,:,:]
trial_events = [((trial_spikes[x,:] > 0).nonzero()[0]-150)/100 for x in range(neurons)]
plt.figure()
dt=1/100
plt.eventplot(trial_events, linewidths=1);
plt.title('movement: %d - trial: %d'%(movement, trial))
plt.ylabel('neuron')
plt.xlabel('time [s]')
def plotCrossValAccuracies(accuracies):
f, ax = plt.subplots(figsize=(8, 3))
ax.boxplot(accuracies, vert=False, widths=.7)
ax.scatter(accuracies, np.ones(len(accuracies)))  # one marker per fold, not hard-coded to 8
ax.set(
xlabel="Accuracy",
yticks=[],
title=f"Average test accuracy: {accuracies.mean():.2%}"
)
ax.spines["left"].set_visible(False)
# @title Generate Data
# @markdown `generateSpikeTrains(seed=37)`
# @markdown `subsetPerception(spikes, seed=0)`
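# Summary (added comment): generateSpikeTrains() builds expected firing rates as
#   rate_i(t) = FR_i + Gain_i * movement * v(t),
# where v(t) is a Gaussian velocity profile normalized to a peak of 1, clips
# negative rates to 0, and draws spike counts as Poisson(rate * dt).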
def generateSpikeTrains(seed=37):
gain = 2
neurons = 50
movements = [0, 1, 2]
repetitions = 800
np.random.seed(seed)
# set up the basic parameters:
dt = 1/100
start, stop = -1.5, 1.5
t = np.arange(start, stop + dt, dt) # a time interval
Velocity_sigma = 0.5 # std dev of the velocity profile
Velocity_Profile = norm.pdf(t, 0, Velocity_sigma)/norm.pdf(0, 0, Velocity_sigma) # The Gaussian velocity profile, normalized to a peak of 1
# set up the neuron properties:
Gains = np.random.rand(neurons) * gain # random sensitivity between 0 and `gain`
FRs = (np.random.rand(neurons) * 60 ) - 10 # random base firing rate between -10 and 50
# output matrix will have this shape:
target_shape = [len(movements), repetitions, neurons, len(Velocity_Profile)]
# build matrix for spikes, first, they depend on the velocity profile:
Spikes = np.repeat(Velocity_Profile.reshape([1, 1, 1, len(Velocity_Profile)]),
len(movements)*repetitions*neurons, axis=2).reshape(target_shape)
# multiplied by gains:
S_gains = np.repeat(np.repeat(Gains.reshape([1, 1, neurons]),
len(movements)*repetitions, axis=1).reshape(target_shape[:3]),
len(Velocity_Profile)).reshape(target_shape)
Spikes = Spikes * S_gains
# and multiplied by the movement:
S_moves = np.repeat( np.array(movements).reshape([len(movements), 1, 1, 1]),
repetitions*neurons*len(Velocity_Profile),
axis=3 ).reshape(target_shape)
Spikes = Spikes * S_moves
# on top of a baseline firing rate:
S_FR = np.repeat(np.repeat(FRs.reshape([1, 1, neurons]),
len(movements)*repetitions, axis=1).reshape(target_shape[:3]),
len(Velocity_Profile)).reshape(target_shape)
Spikes = Spikes + S_FR
# can not run the poisson random number generator on input lower than 0:
Spikes = np.where(Spikes < 0, 0, Spikes)
# so far, these were expected firing rates per second, correct for dt:
Spikes = poisson.rvs(Spikes * dt)
return Spikes
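# Summary (added comment): subsetPerception() fits a logistic regression that
# maps summed population spike counts to motion judgements (0/1) and returns its
# test-set predictions as the simulated 'perception', together with a 40-neuron
# subset of the held-out trials.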
def subsetPerception(spikes, seed=0):
movements = [0, 1, 2]
split = 400
subset = 40
hwin = 3
[num_movements, repetitions, neurons, timepoints] = np.shape(spikes)
decision = np.zeros([num_movements, repetitions])
# ground truth for logistic regression:
y_train = np.repeat([0, 1, 1], split)
y_test = np.repeat([0, 1, 1], repetitions - split)
m_train = np.repeat(movements, split)
m_test = np.repeat(movements, split)
# reproduce the time points:
dt = 1 / 100
start, stop = -1.5, 1.5
t = np.arange(start, stop+dt, dt)
w_idx = list((abs(t) < (hwin*dt)).nonzero()[0])
w_0 = min(w_idx)
w_1 = max(w_idx) + 1 # +1 because Python slicing is zero-based and excludes the end index
# get the total spike counts from stationary and movement trials:
spikes_stat = np.sum(spikes[0, :, :, :], axis=2)
spikes_move = np.sum(spikes[1:, :, :, :], axis=3)
train_spikes_stat = spikes_stat[:split, :]
train_spikes_move = spikes_move[:, :split, :].reshape([-1 ,neurons])
test_spikes_stat = spikes_stat[split:, :]
test_spikes_move = spikes_move[:, split:, :].reshape([-1, neurons])
# data to use to predict y:
x_train = np.concatenate((train_spikes_stat, train_spikes_move))
x_test = np.concatenate(( test_spikes_stat, test_spikes_move))
# this line creates a logistic regression model object, and immediately fits it:
population_model = LogisticRegression(solver='liblinear',
random_state=seed).fit(x_train, y_train)
# solver, one of: 'liblinear', 'newton-cg', 'lbfgs', 'sag', and 'saga'
# some of those require certain other options
# print(population_model.coef_) # slope
# print(population_model.intercept_) # intercept
ground_truth = np.array(population_model.predict(x_test))
ground_truth = ground_truth.reshape([3, -1])
output = {}
output['perception'] = ground_truth
output['spikes'] = spikes[:, split:, :subset, :]
return output
def getData():
spikes = generateSpikeTrains()
dataset = subsetPerception(spikes=spikes)
return dataset
dataset = getData()
perception = dataset['perception']
spikes = dataset['spikes']
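# Added sanity check (not in the original notebook): the dimensions follow
# (movements, trials, neurons, time bins) as returned by subsetPerception().
assert spikes.shape[:3] == (3, 400, 40)
assert perception.shape == (3, 400)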
###Output
_____no_output_____
###Markdown
DemosWe will demo the modeling process to you based on the train illusion. The introductory video will explain the phenomenon to you. Then we will do roleplay to showcase some common pitfalls to you based on a computational modeling project around the train illusion. In addition to the computational model, we will also provide a data neuroscience project example to you so you can appreciate similarities and differences. Enjoy! DisclaimerThe pitfalls roleplay videos were developed for a computational neuroscience modeling project. But all steps and pitfalls also apply to Deep Learning projects. There is a DL joke throughout these videos; that does NOT mean we do not like or appreciate DL (on the contrary, otherwise we would not be teaching it here). But it should serve as a warning that DL is not a magic answer to all questions... 😉
###Code
# @title Video 1: Introduction to tutorial
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1Mf4y1b7xS", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="GyGNs1fLIYQ", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
---- Step 1: Finding a phenomenon and a question to ask about it
###Code
# @title Video 2: Asking a question
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1VK4y1M7dc", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="4Gl8X_y_uoA", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# @title Example projects step 1
from ipywidgets import widgets
from IPython.display import Markdown
markdown1 = '''
## Step 1
<br>
<font size='3pt'>
The train illusion occurs when sitting on a train and viewing another train outside the window. Suddenly, the other train *seems* to move, i.e. you experience visual motion of the other train relative to your train. But which train is actually moving?
Often people have the wrong percept. In particular, they think their own train might be moving when it's the other train that moves; or vice versa. The illusion is usually resolved once you gain vision of the surroundings that lets you disambiguate the relative motion; or if you experience strong vibrations indicating that it is indeed your own train that is in motion.
We asked the following (arbitrary) question for our demo project: "How do noisy vestibular estimates of motion lead to illusory percepts of self motion?"
</font>
'''
markdown2 = '''
## Step 1
<br>
<font size='3pt'>
The train illusion occurs when sitting on a train and viewing another train outside the window. Suddenly, the other train *seems* to move, i.e. you experience visual motion of the other train relative to your train. But which train is actually moving?
Often people mix this up. In particular, they think their own train might be moving when it's the other train that moves; or vice versa. The illusion is usually resolved once you gain vision of the surroundings that lets you disambiguate the relative motion; or if you experience strong vibrations indicating that it is indeed your own train that is in motion.
We assume that we have built the train illusion model (see the other example project colab). That model predicts that accumulated sensory evidence from vestibular signals determines the decision of whether self-motion is experienced or not. We now have vestibular neuron data (simulated in our case, but let's pretend) and would like to see if that prediction holds true.
The data contains *N* neurons and *M* trials for each of 3 motion conditions: no self-motion, slowly accelerating self-motion and faster accelerating self-motion. In our data,
*N* = 40 and *M* = 400.
**So we can ask the following question**: "Does accumulated vestibular neuron activity correlate with self-motion judgements?"
</font>
'''
markdown3 = '''
## Step 1
<br>
<font size='3pt'>
There are many different questions we could ask with the MoVi dataset. We will start with a simple question: "Can we classify movements from skeletal motion data, and if so, which body parts are the most informative ones?"
Our goal is to perform a pilot study to see if this is possible in principle. We will therefore use "ground truth" skeletal motion data that has been computed using an inference algorithm (see MoVi paper). If this works out, then as a next step we might want to use the raw sensor data or even videos...
The ultimate goal could, for example, be to figure out which body parts to record movements from (e.g., is just a wristband enough?) in order to classify movement.
</font>
'''
out2 = widgets.Output()
with out2:
display(Markdown(markdown2))
out1 = widgets.Output()
with out1:
display(Markdown(markdown1))
out3 = widgets.Output()
with out3:
display(Markdown(markdown3))
out = widgets.Tab([out1, out2, out3])
out.set_title(0, 'Computational Model')
out.set_title(1, 'Data Analysis')
out.set_title(2, 'Deep Learning')
display(out)
###Output
_____no_output_____
###Markdown
Asking your own question You should already have a project idea from your brainstorming yesterday. **Write down the phenomenon, question and goal(s) if you have them.** As a reminder, here is what you should discuss and write down: * What exact aspect of data needs modeling? * Answer this question clearly and precisely! Otherwise you will get lost (almost guaranteed) * Write everything down! * Also identify aspects of data that you do not want to address (yet) * Define an evaluation method! * How will you know your modeling is good? * E.g., comparison to specific data (quantitative method of comparison?) * For computational models: think of an experiment that could test your model * You essentially want your model to interface with this experiment, i.e. you want to simulate this experiment. You can find interesting questions by looking for phenomena that differ from your expectations. In *what* way does it differ? *How* could that be explained (starting to think about mechanistic questions and structural hypotheses)? *Why* could it be the way it is? What experiment could you design to investigate this phenomenon? What kind of data would you need? **Make sure to avoid the pitfalls!** A recap on pitfalls: Question is too general. Remember: science advances one small step at a time. Get the small step right… Precise aspect of phenomenon you want to model is unclear. You will fail to ask a meaningful question. You have already chosen a toolkit. This will prevent you from thinking deeply about the best way to answer your scientific question. You don't have a clear goal. What do you want to get out of modeling? You don't have a potential experiment in mind. This will help concretize your objectives and think through the logic behind your goal. **Note** The hardest part is Step 1. Once that is properly set up, all the others should be easier. **BUT**: often you think that Step 1 is done, only to figure out in later steps (anywhere, really) that you were not as clear on your question and goal as you thought. Revisiting Step 1 is a frequent necessity. Don't feel bad about it. You can revisit Step 1 later; for now, let's move on to the next step. ---- Step 2: Understanding the state of the art & background Here you will do a literature review (**to be done AFTER this tutorial!**).
###Code
# @title Video 3: Literature Review & Background Knowledge
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1by4y1M7TZ", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="d8zriLaMc14", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# @title Example projects step 2
from ipywidgets import widgets
from IPython.display import Markdown
import numpy as np
markdown1 = '''
## Step 2
<br>
<font size='3pt'>
You have learned all about the vestibular system in the Intro video. This is also where you would do a literature search to learn more about what's known about self-motion perception and vestibular signals. You would also want to examine any attempts to model self-motion, perceptual decision making and vestibular processing.</font>
'''
markdown21 = '''
## Step 2
<br>
<font size='3pt'>
While it seems to be a well-known fact that vestibular signals are noisy, we should check whether we can also find this in the literature.
Let's also see what's in our data: there should be a 4d array called `spikes` that holds spike counts (non-negative integers) and a 2d array called `perception` with self-motion judgements (0 = no motion, 1 = motion). Let's see what this data looks like:
</font><br>
'''
markdown22 = '''
<br>
<font size='3pt'>
In the `spikes` array, we see our 3 acceleration conditions (first dimension), with 400 trials each (second dimension) and simultaneous recordings from 40 neurons (third dimension), across 3 seconds in 10 ms bins (fourth dimension). The first two dimensions are also there in the `perception` array.
Perfect perception would have looked like [0, 1, 1]. The average judgements are far from correct (lots of self-motion illusions) but they do make some sense: it's closer to 0 in the no-motion condition and closer to 1 in both of the real-motion conditions.
The idea of our project is that the vestibular signals are noisy, so they might be misinterpreted by the brain. Let's see if we can reproduce the stimuli from the data:
</font>
<br>
'''
markdown23 = '''
<br>
<font size='3pt'>
Blue is the no-motion condition, which produces flat average spike counts across the 3 s time interval. The orange and green lines show a bell-shaped curve that corresponds to the acceleration profile. But there also seems to be considerable noise: exactly what we need. Let's see what the spike trains for a single trial look like:
</font>
<br>
'''
markdown24 = '''
<br>
<font size='3pt'>
You can change the trial number in the bit of code above to compare what the rasterplots look like in different trials. You'll notice that they all look kind of the same: the 3 conditions are very hard (impossible?) to distinguish by eye-balling.
Now that we have seen the data, let's see if we can extract self-motion judgements from the spike counts.
</font>
<br>
'''
display(Markdown(r""))
markdown3 = '''
## Step 2
<br>
<font size='3pt'>
Most importantly, our literature review needs to address the following:
* what modeling approaches make it possible to classify time series data?
* how is human motion captured?
* what exactly is in the MoVi dataset?
* what is known regarding classification of human movement based on different measurements?
What we learn from the literature review is too long to write out here... But we would like to point out that human motion classification has been done; we're not proposing a very novel project here. But that's ok for an NMA project!
</font>
<br>
'''
out2 = widgets.Output()
with out2:
display(Markdown(markdown21))
print(f'The shape of `spikes` is: {np.shape(spikes)}')
print(f'The shape of `perception` is: {np.shape(perception)}')
print(f'The mean of `perception` is: {np.mean(perception, axis=1)}')
display(Markdown(markdown22))
for move_no in range(3):
plt.plot(np.arange(-1.5, 1.5 + (1/100),
(1/100)),
np.mean(np.mean(spikes[move_no, :, :, :],
axis=0),
axis=0),
label=['no motion', '$1 m/s^2$', '$2 m/s^2$'][move_no])
plt.xlabel('time [s]');
plt.ylabel('averaged spike counts');
plt.legend()
plt.show()
display(Markdown(markdown23))
for move in range(3):
rasterplot(spikes = spikes, movement = move, trial = 0)
plt.show()
display(Markdown(markdown24))
out1 = widgets.Output()
with out1:
display(Markdown(markdown1))
out3 = widgets.Output()
with out3:
display(Markdown(markdown3))
out = widgets.Tab([out1, out2, out3])
out.set_title(0, 'Computational Model')
out.set_title(1, 'Data Analysis')
out.set_title(2, 'Deep Learning')
display(out)
###Output
_____no_output_____
Yeast.ipynb | ###Markdown
Using R in Teaching from *Network Science* Amir Barghi, Department of Mathematics and Statistics, Saint Michael's College---- Yeast Protein Interaction Network Loading Packages
###Code
library(tidyverse)
library(igraph)
library(igraphdata)
library(ggraph)
library(latex2exp)
###Output
_____no_output_____
###Markdown
Loading the Data Set Data from [`igraphdata::yeast`](https://github.com/igraph/igraphdata)Data Source: von Mering, C., Krause, R., Snel, B. et al. Comparative assessment of large-scale data sets of protein–protein interactions. *Nature* **417**, 399–403 (2002). https://doi.org/10.1038/nature750
###Code
data(yeast)
g <- yeast
V(g)
E(g)
components(g)$no
components(g)$csize
glimpse(vertex_attr(g))
glimpse(edge_attr(g))
vertex_attr(g, name = 'Class')[1:10]
edge_attr(g, name = 'Confidence')[1:10]
###Output
_____no_output_____
###Markdown
Visualizing the Yeast Network
###Code
set.seed(42)
ggraph(g, layout = 'lgl') +
geom_edge_fan(edge_linetype = 3, color = 'dark blue', alpha = 0.25) +
geom_node_point(color = 'dark red', size = 1, alpha = 0.75) +
theme_graph(base_family = 'Helvetica') +
labs(title = 'Yeast Interaction Network',
subtitle = 'Displayed Using Layout Generator for Larger Graphs')
set.seed(42)
ggraph(g, layout = 'drl') +
geom_edge_fan(edge_linetype = 3, color = 'dark blue', alpha = 0.25) +
geom_node_point(color = 'dark red', size = 1, alpha = 0.75) +
theme_graph(base_family = 'Helvetica') +
labs(title = 'Yeast Interaction Network',
subtitle = 'Displayed Using Distributed Recursive Layout')
set.seed(42)
ggraph(g, layout = 'mds') +
geom_edge_fan(edge_linetype = 3, color = 'dark blue', alpha = 0.25) +
geom_node_point(color = 'dark red', size = 1, alpha = 0.75) +
theme_graph(base_family = 'Helvetica') +
labs(title = 'Yeast Interaction Network',
subtitle = 'Displayed Using Multidimensional Scaling Layout')
###Output
_____no_output_____
###Markdown
Summary Statistics of the Yeast Network
###Code
suppressMessages(df <- bind_cols(enframe(eccentricity(g)),
enframe(betweenness(g)),
enframe(degree(g)),
enframe(transitivity(g, type = c('local')))))
df <- df %>% select(name...1, value...2, value...4, value...6, value...8)
names(df) <- c('name', 'eccentricity', 'betweenness', 'degree', 'clustering')
head(df)
tail(df)
glimpse(df)
df %>%
summarize(avg_deg = mean(degree),
delta = max(degree),
prop = sum(degree <= avg_deg) / n(),
diam = max(eccentricity),
radius = min(eccentricity),
avg_cc = mean(clustering, na.rm = TRUE),
avg_distance = mean_distance(g, directed = FALSE, unconnected = TRUE))
(d <- mean_distance(g, directed = FALSE, unconnected = TRUE))
mean(distances(g))
###Output
_____no_output_____
###Markdown
Fig. 2.18(a) on p. 66
###Code
distance_table(g)
D <- data.frame(1:length(distance_table(g)$res),
distance_table(g)$res / sum(distance_table(g)$res))
names(D) <- c('x', 'y')
D %>%
ggplot(aes(x = x, y = y)) +
geom_point() +
geom_line(aes(x = d), color = 'blue') +
labs(title = 'Distribution of Distance (Proportions) in the Yeast Network') +
labs(x = 'distance', y = 'density')
###Output
_____no_output_____
###Markdown
The Degree Distribution
###Code
df %>%
ggplot(aes(x = degree, y = ..density..)) +
geom_density(fill = 'red') +
labs(title = 'KDE of Degrees in the Yeast Network')
df %>%
ggplot(aes(x = degree, y = ..density..)) +
geom_histogram(binwidth = 1, fill = 'blue') +
labs(title = 'Histogram of Degrees in the Yeast Network')
df %>%
filter(degree <= 20) %>%
ggplot(aes(x = degree, y = ..density..)) +
geom_density(fill = 'red') +
labs(title = 'KDE of Degrees in the Yeast Network',
subtitle = TeX('for Nodes with Degree $\\leq 20$'))
df %>%
filter(degree <= 20) %>%
ggplot(aes(x = degree, y = ..density..)) +
geom_histogram(binwidth = 1, fill = 'blue') +
labs(title = 'Histogram of Degrees in the Yeast Network',
subtitle = TeX('for Nodes with Degree $\\leq 20$'))
###Output
_____no_output_____
###Markdown
Fig. 2.18(b) on p. 66
###Code
df %>%
group_by(degree) %>%
summarise(cc_deg = mean(clustering, na.rm = TRUE)) %>%
ungroup() %>%
ggplot(aes(x = degree, y = cc_deg)) +
geom_point(na.rm = TRUE, color = 'blue') +
scale_x_log10() +
scale_y_log10() +
labs(title = 'Relation Between Local Clustering Coefficient and Degree',
subtitle = 'in the Yeast Network') +
labs(x = TeX('$p_k$'), y = TeX('$C_k$'))
###Output
_____no_output_____
###Markdown
Local Clustering Coefficient Distribution
###Code
df %>%
ggplot(aes(x = clustering, y = ..density..)) +
geom_density(fill = 'red', na.rm = TRUE) +
labs(title = 'KDE of Local Clustering Coefficients in the Yeast Network')
df %>%
ggplot(aes(x = clustering, y = ..density..)) +
geom_histogram(binwidth = .1, fill = 'blue', na.rm = TRUE) +
labs(title = 'Histogram of Local Clustering Coefficients in the Yeast Network')
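# Added note: ln(N) / ln(<k>) approximates the average path length of a random
# network with the same size and average degree; compare it with the observed
# mean distance computed just below.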
log(gorder(g)) / log(mean(df$degree))
mean_distance(g, directed = FALSE, unconnected = TRUE)
diameter(g)
C <- mean(df$clustering, na.rm = TRUE)
M <- mean(df$degree)
df %>%
group_by(degree) %>%
summarise(cc_deg = mean(clustering)) %>%
ungroup()
###Output
_____no_output_____
###Markdown
Fig. 3.13(d) on p. 96
###Code
df %>%
group_by(degree) %>%
summarise(cc_deg = mean(clustering)) %>%
ggplot(aes(x = degree, y = cc_deg)) +
geom_point(na.rm = TRUE, color = 'blue') +
geom_line(aes(y = C), color = 'blue') +
geom_line(aes(y = M / gorder(g)), color = 'red') +
scale_x_log10() +
scale_y_log10() +
labs(title = 'Relation Between Local Clustering Coefficient and Degree',
subtitle = 'The blue line is the average local clustering coefficient; \nthe red one is the one predicted by the random model.') +
labs(x = 'k', y = TeX('$C(k)$'))
###Output
_____no_output_____
###Markdown
Visualizing Other Relations with Degree
###Code
df %>%
ggplot(aes(x = degree, y = betweenness)) +
geom_point(na.rm = TRUE, size = 0.5, color = 'red') +
labs(title = 'Relationship Between Betweenness Centrality and Degree')
df %>%
ggplot(aes(x = degree, y = betweenness + 0.00000001)) +
geom_point(na.rm = TRUE, size = 0.5, color = 'red') +
scale_y_log10() +
labs(title = TeX('Relationship Between $\\log_{10}$ of Betweenness Centrality and Degree')) +
labs(y = '$\\log_{10}$(betweenness)')
df %>%
filter(betweenness > 0) %>%
ggplot(aes(x = degree, y = betweenness)) +
geom_point(na.rm = TRUE, size = 0.5, color = 'red') +
scale_y_log10() +
labs(title = TeX('Relationship Between $\\log_{10}$ of Betweenness Centrality and Degree')) +
labs(y = TeX('$\\log_{10}$(betweenness)'))
df %>%
ggplot(aes(x = degree, y = eccentricity)) +
geom_point(na.rm = TRUE, size = 0.5, color = 'orange') +
labs(title = 'Relationship Between Eccentricity and Degree')
df %>%
ggplot(aes(x = degree, y = clustering)) +
geom_point(na.rm = TRUE, size = 0.5, color = 'blue') +
labs(title = 'Relationship Between Local Clustering Coefficient and Degree')
###Output
_____no_output_____ |
extra_capsnets.ipynb | ###Markdown
**Capsule Networks (CapsNets)** *Based on the paper: [Dynamic Routing Between Capsules](https://arxiv.org/abs/1710.09829), by Sara Sabour, Nicholas Frosst and Geoffrey E. Hinton (NIPS 2017).* *Inspired in part from Huadong Liao's implementation: [CapsNet-TensorFlow](https://github.com/naturomics/CapsNet-Tensorflow).* Run in Google Colab **Warning**: this is the code for the 1st edition of the book. Please visit https://github.com/ageron/handson-ml2 for the 2nd edition code, with up-to-date notebooks using the latest library versions. In particular, the 1st edition is based on TensorFlow 1, while the 2nd edition uses TensorFlow 2, which is much simpler to use. Introduction Watch [this video](https://youtu.be/pPN8d0E3900) to understand the key ideas behind Capsule Networks:
###Code
from IPython.display import IFrame
IFrame(src="https://www.youtube.com/embed/pPN8d0E3900", width=560, height=315, frameborder=0, allowfullscreen=True)
###Output
_____no_output_____
###Markdown
You may also want to watch [this video](https://youtu.be/2Kawrd5szHE), which presents the main difficulties in this notebook:
###Code
IFrame(src="https://www.youtube.com/embed/2Kawrd5szHE", width=560, height=315, frameborder=0, allowfullscreen=True)
###Output
_____no_output_____
###Markdown
Imports To support both Python 2 and Python 3:
###Code
from __future__ import division, print_function, unicode_literals
###Output
_____no_output_____
###Markdown
To plot pretty figures:
###Code
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
We will need NumPy and TensorFlow:
###Code
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 1.x
except Exception:
pass
import numpy as np
import tensorflow as tf
###Output
/Users/ageron/.virtualenvs/ml/lib/python3.6/importlib/_bootstrap.py:219: RuntimeWarning: compiletime version 3.5 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.6
return f(*args, **kwds)
###Markdown
Reproducibility Let's reset the default graph, in case you re-run this notebook without restarting the kernel:
###Code
tf.reset_default_graph()
###Output
_____no_output_____
###Markdown
Let's set the random seeds so that this notebook always produces the same output:
###Code
np.random.seed(42)
tf.set_random_seed(42)
###Output
_____no_output_____
###Markdown
Load MNIST Yes, I know, it's MNIST again. But hopefully this powerful idea will work as well on larger datasets; time will tell.
###Code
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/")
###Output
Extracting /tmp/data/train-images-idx3-ubyte.gz
Extracting /tmp/data/train-labels-idx1-ubyte.gz
Extracting /tmp/data/t10k-images-idx3-ubyte.gz
Extracting /tmp/data/t10k-labels-idx1-ubyte.gz
###Markdown
Let's look at what these hand-written digit images look like:
###Code
n_samples = 5
plt.figure(figsize=(n_samples * 2, 3))
for index in range(n_samples):
plt.subplot(1, n_samples, index + 1)
sample_image = mnist.train.images[index].reshape(28, 28)
plt.imshow(sample_image, cmap="binary")
plt.axis("off")
plt.show()
###Output
_____no_output_____
###Markdown
And these are the corresponding labels:
###Code
mnist.train.labels[:n_samples]
###Output
_____no_output_____
###Markdown
Now let's build a Capsule Network to classify these images. Here's the overall architecture, enjoy the ASCII art! ;-)Note: for readability, I left out two arrows: Labels → Mask, and Input Images → Reconstruction Loss. ``` Loss ↑ ┌─────────┴─────────┐ Labels → Margin Loss Reconstruction Loss ↑ ↑ Length Decoder ↑ ↑ Digit Capsules ────Mask────┘ ↖↑↗ ↖↑↗ ↖↑↗ Primary Capsules ↑ Input Images``` We are going to build the graph starting from the bottom layer, and gradually move up, left side first. Let's go! Input Images Let's start by creating a placeholder for the input images (28×28 pixels, 1 color channel = grayscale).
###Code
X = tf.placeholder(shape=[None, 28, 28, 1], dtype=tf.float32, name="X")
###Output
_____no_output_____
###Markdown
Primary Capsules The first layer will be composed of 32 maps of 6×6 capsules each, where each capsule will output an 8D activation vector:
###Code
caps1_n_maps = 32
caps1_n_caps = caps1_n_maps * 6 * 6 # 1152 primary capsules
caps1_n_dims = 8
###Output
_____no_output_____
###Markdown
To compute their outputs, we first apply two regular convolutional layers:
###Code
conv1_params = {
"filters": 256,
"kernel_size": 9,
"strides": 1,
"padding": "valid",
"activation": tf.nn.relu,
}
conv2_params = {
"filters": caps1_n_maps * caps1_n_dims, # 256 convolutional filters
"kernel_size": 9,
"strides": 2,
"padding": "valid",
"activation": tf.nn.relu
}
conv1 = tf.layers.conv2d(X, name="conv1", **conv1_params)
conv2 = tf.layers.conv2d(conv1, name="conv2", **conv2_params)
###Output
_____no_output_____
###Markdown
Note: since we used a kernel size of 9 and no padding (for some reason, that's what `"valid"` means), the image shrank by 9-1=8 pixels after each convolutional layer (28×28 to 20×20, then 20×20 to 12×12), and since we used a stride of 2 in the second convolutional layer, the image size was divided by 2. This is how we end up with 6×6 feature maps. Next, we reshape the output to get a bunch of 8D vectors representing the outputs of the primary capsules. The output of `conv2` is an array containing 32×8=256 feature maps for each instance, where each feature map is 6×6. So the shape of this output is (_batch size_, 6, 6, 256). We want to chop the 256 into 32 vectors of 8 dimensions each. We could do this by reshaping to (_batch size_, 6, 6, 32, 8). However, since this first capsule layer will be fully connected to the next capsule layer, we can simply flatten the 6×6 grids. This means we just need to reshape to (_batch size_, 6×6×32, 8).
###Code
caps1_raw = tf.reshape(conv2, [-1, caps1_n_caps, caps1_n_dims],
name="caps1_raw")
###Output
_____no_output_____
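###Markdown
Just to double-check the size arithmetic above, here is a tiny sketch (not part of the model) using the usual formula for a `"valid"` convolution, output size = (input size - kernel size) // stride + 1:
###Code
def conv_output_size(input_size, kernel_size, stride):
    # "valid" padding: no padding, so the kernel must fit entirely inside the input
    return (input_size - kernel_size) // stride + 1

size_after_conv1 = conv_output_size(28, kernel_size=9, stride=1)
size_after_conv2 = conv_output_size(size_after_conv1, kernel_size=9, stride=2)
print(size_after_conv1, size_after_conv2)  # 20 6
###Output
_____no_output_____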
###Markdown
Now we need to squash these vectors. Let's define the `squash()` function, based on equation (1) from the paper:$\operatorname{squash}(\mathbf{s}) = \dfrac{\|\mathbf{s}\|^2}{1 + \|\mathbf{s}\|^2} \dfrac{\mathbf{s}}{\|\mathbf{s}\|}$The `squash()` function will squash all vectors in the given array, along the given axis (by default, the last axis).**Caution**, a nasty bug is waiting to bite you: the derivative of $\|\mathbf{s}\|$ is undefined when $\|\mathbf{s}\|=0$, so we can't just use `tf.norm()`, or else it will blow up during training: if a vector is zero, the gradients will be `nan`, so when the optimizer updates the variables, they will also become `nan`, and from then on you will be stuck in `nan` land. The solution is to implement the norm manually by computing the square root of the sum of squares plus a tiny epsilon value: $\|\mathbf{s}\| \approx \sqrt{\sum\limits_i{{s_i}^2}\,\,+ \epsilon}$.
###Code
def squash(s, axis=-1, epsilon=1e-7, name=None):
with tf.name_scope(name, default_name="squash"):
squared_norm = tf.reduce_sum(tf.square(s), axis=axis,
keep_dims=True)
safe_norm = tf.sqrt(squared_norm + epsilon)
squash_factor = squared_norm / (1. + squared_norm)
unit_vector = s / safe_norm
return squash_factor * unit_vector
###Output
_____no_output_____
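###Markdown
Before moving on, a quick numeric sanity check may help. This NumPy re-implementation of `squash()` (just a sketch mirroring the code above, not used by the model) confirms that the epsilon trick keeps the zero vector finite, that short vectors shrink toward zero length, and that long vectors approach unit length:
###Code
def squash_np(s, axis=-1, epsilon=1e-7):
    # NumPy mirror of the squash() function above, for experimentation only
    squared_norm = np.sum(np.square(s), axis=axis, keepdims=True)
    safe_norm = np.sqrt(squared_norm + epsilon)
    return squared_norm / (1. + squared_norm) * s / safe_norm

for v in [np.zeros(8), 0.1 * np.ones(8), 10. * np.ones(8)]:
    print(np.linalg.norm(v), "->", np.linalg.norm(squash_np(v)))
# 0.0 -> 0.0 (no nan!), ~0.28 -> ~0.07, ~28.3 -> ~0.999
###Output
_____no_output_____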
###Markdown
Now let's apply this function to get the output $\mathbf{u}_i$ of each primary capsule $i$:
###Code
caps1_output = squash(caps1_raw, name="caps1_output")
###Output
_____no_output_____
###Markdown
Great! We have the output of the first capsule layer. It wasn't too hard, was it? However, computing the next layer is where the fun really begins. Digit Capsules To compute the output of the digit capsules, we must first compute the predicted output vectors (one for each primary / digit capsule pair). Then we can run the routing by agreement algorithm. Compute the Predicted Output Vectors The digit capsule layer contains 10 capsules (one for each digit) of 16 dimensions each:
###Code
caps2_n_caps = 10
caps2_n_dims = 16
###Output
_____no_output_____
###Markdown
For each capsule $i$ in the first layer, we want to predict the output of every capsule $j$ in the second layer. For this, we will need a transformation matrix $\mathbf{W}_{i,j}$ (one for each pair of capsules ($i$, $j$)), then we can compute the predicted output $\hat{\mathbf{u}}_{j|i} = \mathbf{W}_{i,j} \, \mathbf{u}_i$ (equation (2)-right in the paper). Since we want to transform an 8D vector into a 16D vector, each transformation matrix $\mathbf{W}_{i,j}$ must have a shape of (16, 8). To compute $\hat{\mathbf{u}}_{j|i}$ for every pair of capsules ($i$, $j$), we will use a nice feature of the `tf.matmul()` function: you probably know that it lets you multiply two matrices, but you may not know that it also lets you multiply higher dimensional arrays. It treats the arrays as arrays of matrices, and it performs itemwise matrix multiplication. For example, suppose you have two 4D arrays, each containing a 2×3 grid of matrices. The first contains matrices $\mathbf{A}, \mathbf{B}, \mathbf{C}, \mathbf{D}, \mathbf{E}, \mathbf{F}$ and the second contains matrices $\mathbf{G}, \mathbf{H}, \mathbf{I}, \mathbf{J}, \mathbf{K}, \mathbf{L}$. If you multiply these two 4D arrays using the `tf.matmul()` function, this is what you get:$\pmatrix{\mathbf{A} & \mathbf{B} & \mathbf{C} \\\mathbf{D} & \mathbf{E} & \mathbf{F}} \times\pmatrix{\mathbf{G} & \mathbf{H} & \mathbf{I} \\\mathbf{J} & \mathbf{K} & \mathbf{L}} = \pmatrix{\mathbf{AG} & \mathbf{BH} & \mathbf{CI} \\\mathbf{DJ} & \mathbf{EK} & \mathbf{FL}}$ We can apply this function to compute $\hat{\mathbf{u}}_{j|i}$ for every pair of capsules ($i$, $j$) like this (recall that there are 6×6×32=1152 capsules in the first layer, and 10 in the second layer):$\pmatrix{ \mathbf{W}_{1,1} & \mathbf{W}_{1,2} & \cdots & \mathbf{W}_{1,10} \\ \mathbf{W}_{2,1} & \mathbf{W}_{2,2} & \cdots & \mathbf{W}_{2,10} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{W}_{1152,1} & \mathbf{W}_{1152,2} & \cdots & \mathbf{W}_{1152,10}} \times\pmatrix{ \mathbf{u}_1 & \mathbf{u}_1 & \cdots & \mathbf{u}_1 \\ \mathbf{u}_2 & \mathbf{u}_2 & \cdots & \mathbf{u}_2 \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{u}_{1152} & \mathbf{u}_{1152} & \cdots & \mathbf{u}_{1152}}=\pmatrix{\hat{\mathbf{u}}_{1|1} & \hat{\mathbf{u}}_{2|1} & \cdots & \hat{\mathbf{u}}_{10|1} \\\hat{\mathbf{u}}_{1|2} & \hat{\mathbf{u}}_{2|2} & \cdots & \hat{\mathbf{u}}_{10|2} \\\vdots & \vdots & \ddots & \vdots \\\hat{\mathbf{u}}_{1|1152} & \hat{\mathbf{u}}_{2|1152} & \cdots & \hat{\mathbf{u}}_{10|1152}}$ The shape of the first array is (1152, 10, 16, 8), and the shape of the second array is (1152, 10, 8, 1). Note that the second array must contain 10 identical copies of the vectors $\mathbf{u}_1$ to $\mathbf{u}_{1152}$. To create this array, we will use the handy `tf.tile()` function, which lets you create an array containing many copies of a base array, tiled in any way you want. Oh, wait a second! We forgot one dimension: _batch size_. Say we feed 50 images to the capsule network, it will make predictions for these 50 images simultaneously. So the shape of the first array must be (50, 1152, 10, 16, 8), and the shape of the second array must be (50, 1152, 10, 8, 1). The first layer capsules actually already output predictions for all 50 images, so the second array will be fine, but for the first array, we will need to use `tf.tile()` to have 50 copies of the transformation matrices. Okay, let's start by creating a trainable variable of shape (1, 1152, 10, 16, 8) that will hold all the transformation matrices. 
The first dimension of size 1 will make this array easy to tile. We initialize this variable randomly using a normal distribution with a standard deviation to 0.1.
###Code
init_sigma = 0.1
W_init = tf.random_normal(
shape=(1, caps1_n_caps, caps2_n_caps, caps2_n_dims, caps1_n_dims),
stddev=init_sigma, dtype=tf.float32, name="W_init")
W = tf.Variable(W_init, name="W")
###Output
_____no_output_____
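###Markdown
In case the itemwise matrix multiplication described above feels abstract, here is a toy NumPy sketch (with made-up, smaller shapes: a 2×3 grid instead of _batch size_×1152×10); `np.matmul()` treats stacks of matrices the same way `tf.matmul()` does:
###Code
A = np.random.rand(2, 3, 16, 8)  # a 2x3 grid of 16x8 matrices
B = np.random.rand(2, 3, 8, 1)   # a 2x3 grid of 8x1 column vectors
C = np.matmul(A, B)              # one 16x8 @ 8x1 product per grid cell
print(C.shape)                   # (2, 3, 16, 1)
###Output
_____no_output_____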
###Markdown
Now we can create the first array by repeating `W` once per instance:
###Code
batch_size = tf.shape(X)[0]
W_tiled = tf.tile(W, [batch_size, 1, 1, 1, 1], name="W_tiled")
###Output
_____no_output_____
###Markdown
That's it! On to the second array, now. As discussed earlier, we need to create an array of shape (_batch size_, 1152, 10, 8, 1), containing the output of the first layer capsules, repeated 10 times (once per digit, along the third dimension, which is axis=2). The `caps1_output` array has a shape of (_batch size_, 1152, 8), so we first need to expand it twice, to get an array of shape (_batch size_, 1152, 1, 8, 1), then we can repeat it 10 times along the third dimension:
###Code
caps1_output_expanded = tf.expand_dims(caps1_output, -1,
name="caps1_output_expanded")
caps1_output_tile = tf.expand_dims(caps1_output_expanded, 2,
name="caps1_output_tile")
caps1_output_tiled = tf.tile(caps1_output_tile, [1, 1, caps2_n_caps, 1, 1],
name="caps1_output_tiled")
###Output
_____no_output_____
###Markdown
Let's check the shape of the first array:
###Code
W_tiled
###Output
_____no_output_____
###Markdown
Good, and now the second:
###Code
caps1_output_tiled
###Output
_____no_output_____
###Markdown
Yes! Now, to get all the predicted output vectors $\hat{\mathbf{u}}_{j|i}$, we just need to multiply these two arrays using `tf.matmul()`, as explained earlier:
###Code
caps2_predicted = tf.matmul(W_tiled, caps1_output_tiled,
name="caps2_predicted")
###Output
_____no_output_____
###Markdown
Let's check the shape:
###Code
caps2_predicted
###Output
_____no_output_____
###Markdown
Perfect, for each instance in the batch (we don't know the batch size yet, hence the "?") and for each pair of first and second layer capsules (1152×10) we have a 16D predicted output column vector (16×1). We're ready to apply the routing by agreement algorithm! Routing by agreement First let's initialize the raw routing weights $b_{i,j}$ to zero:
###Code
raw_weights = tf.zeros([batch_size, caps1_n_caps, caps2_n_caps, 1, 1],
dtype=np.float32, name="raw_weights")
###Output
_____no_output_____
###Markdown
We will see why we need the last two dimensions of size 1 in a minute. Round 1 First, let's apply the softmax function to compute the routing weights, $\mathbf{c}_{i} = \operatorname{softmax}(\mathbf{b}_i)$ (equation (3) in the paper):
###Code
routing_weights = tf.nn.softmax(raw_weights, dim=2, name="routing_weights")
###Output
_____no_output_____
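###Markdown
Note that `dim=2` makes the softmax run across the 10 digit capsules, so each primary capsule's routing weights sum to 1. A quick NumPy illustration with a toy shape (a sketch, 4 primary capsules instead of 1152):
###Code
b = np.random.rand(1, 4, 10, 1, 1)                        # toy raw routing weights
c = np.exp(b) / np.sum(np.exp(b), axis=2, keepdims=True)  # softmax over the digit capsules
print(np.sum(c, axis=2).ravel())                          # [1. 1. 1. 1.]
###Output
_____no_output_____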
###Markdown
Now let's compute the weighted sum of all the predicted output vectors for each second-layer capsule, $\mathbf{s}_j = \sum\limits_{i}{c_{i,j}\hat{\mathbf{u}}_{j|i}}$ (equation (2)-left in the paper):
###Code
weighted_predictions = tf.multiply(routing_weights, caps2_predicted,
name="weighted_predictions")
weighted_sum = tf.reduce_sum(weighted_predictions, axis=1, keep_dims=True,
name="weighted_sum")
###Output
_____no_output_____
###Markdown
There are a couple important details to note here:* To perform elementwise matrix multiplication (also called the Hadamard product, noted $\circ$), we use the `tf.multiply()` function. It requires `routing_weights` and `caps2_predicted` to have the same rank, which is why we added two extra dimensions of size 1 to `routing_weights`, earlier.* The shape of `routing_weights` is (_batch size_, 1152, 10, 1, 1) while the shape of `caps2_predicted` is (_batch size_, 1152, 10, 16, 1). Since they don't match on the fourth dimension (1 _vs_ 16), `tf.multiply()` automatically _broadcasts_ the `routing_weights` 16 times along that dimension. If you are not familiar with broadcasting, a simple example might help: $ \pmatrix{1 & 2 & 3 \\ 4 & 5 & 6} \circ \pmatrix{10 & 100 & 1000} = \pmatrix{1 & 2 & 3 \\ 4 & 5 & 6} \circ \pmatrix{10 & 100 & 1000 \\ 10 & 100 & 1000} = \pmatrix{10 & 200 & 3000 \\ 40 & 500 & 6000} $ And finally, let's apply the squash function to get the outputs of the second layer capsules at the end of the first iteration of the routing by agreement algorithm, $\mathbf{v}_j = \operatorname{squash}(\mathbf{s}_j)$ :
###Code
caps2_output_round_1 = squash(weighted_sum, axis=-2,
name="caps2_output_round_1")
caps2_output_round_1
###Output
_____no_output_____
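###Markdown
Here is the broadcasting example from the text, reproduced in NumPy (a toy sketch, just to confirm the behavior):
###Code
M = np.array([[1., 2., 3.],
              [4., 5., 6.]])
row = np.array([10., 100., 1000.])
print(M * row)  # row is broadcast across both rows of M:
                # [[  10.  200. 3000.]
                #  [  40.  500. 6000.]]
###Output
_____no_output_____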
###Markdown
Good! We have ten 16D output vectors for each instance, as expected. Round 2 First, let's measure how close each predicted vector $\hat{\mathbf{u}}_{j|i}$ is to the actual output vector $\mathbf{v}_j$ by computing their scalar product $\hat{\mathbf{u}}_{j|i} \cdot \mathbf{v}_j$. * Quick math reminder: if $\vec{a}$ and $\vec{b}$ are two vectors of equal length, and $\mathbf{a}$ and $\mathbf{b}$ are their corresponding column vectors (i.e., matrices with a single column), then $\mathbf{a}^T \mathbf{b}$ (i.e., the matrix multiplication of the transpose of $\mathbf{a}$, and $\mathbf{b}$) is a 1×1 matrix containing the scalar product of the two vectors $\vec{a}\cdot\vec{b}$. In Machine Learning, we generally represent vectors as column vectors, so when we talk about computing the scalar product $\hat{\mathbf{u}}_{j|i} \cdot \mathbf{v}_j$, this actually means computing ${\hat{\mathbf{u}}_{j|i}}^T \mathbf{v}_j$. Since we need to compute the scalar product $\hat{\mathbf{u}}_{j|i} \cdot \mathbf{v}_j$ for each instance, and for each pair of first and second level capsules $(i, j)$, we will once again take advantage of the fact that `tf.matmul()` can multiply many matrices simultaneously. This will require playing around with `tf.tile()` to get all dimensions to match (except for the last 2), just like we did earlier. So let's look at the shape of `caps2_predicted`, which holds all the predicted output vectors $\hat{\mathbf{u}}_{j|i}$ for each instance and each pair of capsules:
###Code
caps2_predicted
###Output
_____no_output_____
###Markdown
And now let's look at the shape of `caps2_output_round_1`, which holds 10 output vectors of 16D each, for each instance:
###Code
caps2_output_round_1
###Output
_____no_output_____
###Markdown
To get these shapes to match, we just need to tile the `caps2_output_round_1` array 1152 times (once per primary capsule) along the second dimension:
###Code
caps2_output_round_1_tiled = tf.tile(
caps2_output_round_1, [1, caps1_n_caps, 1, 1, 1],
name="caps2_output_round_1_tiled")
###Output
_____no_output_____
###Markdown
And now we are ready to call `tf.matmul()` (note that we must tell it to transpose the matrices in the first array, to get ${\hat{\mathbf{u}}_{j|i}}^T$ instead of $\hat{\mathbf{u}}_{j|i}$):
###Code
agreement = tf.matmul(caps2_predicted, caps2_output_round_1_tiled,
transpose_a=True, name="agreement")
###Output
_____no_output_____
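###Markdown
To see why `transpose_a=True` gives us scalar products, here is a minimal NumPy sketch with toy shapes (4 primary capsules instead of 1152):
###Code
u_hat = np.random.rand(1, 4, 10, 16, 1)         # toy predicted output vectors
v = np.tile(np.random.rand(1, 1, 10, 16, 1),
            [1, 4, 1, 1, 1])                    # toy round-1 outputs, tiled
u_hat_T = np.transpose(u_hat, [0, 1, 2, 4, 3])  # transpose the last two dimensions
print(np.matmul(u_hat_T, v).shape)              # (1, 4, 10, 1, 1): one scalar product per pair
###Output
_____no_output_____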
###Markdown
We can now update the raw routing weights $b_{i,j}$ by simply adding the scalar product $\hat{\mathbf{u}}_{j|i} \cdot \mathbf{v}_j$ we just computed: $b_{i,j} \gets b_{i,j} + \hat{\mathbf{u}}_{j|i} \cdot \mathbf{v}_j$ (see Procedure 1, step 7, in the paper).
###Code
raw_weights_round_2 = tf.add(raw_weights, agreement,
name="raw_weights_round_2")
###Output
_____no_output_____
###Markdown
The rest of round 2 is the same as in round 1:
###Code
routing_weights_round_2 = tf.nn.softmax(raw_weights_round_2,
dim=2,
name="routing_weights_round_2")
weighted_predictions_round_2 = tf.multiply(routing_weights_round_2,
caps2_predicted,
name="weighted_predictions_round_2")
weighted_sum_round_2 = tf.reduce_sum(weighted_predictions_round_2,
axis=1, keep_dims=True,
name="weighted_sum_round_2")
caps2_output_round_2 = squash(weighted_sum_round_2,
axis=-2,
name="caps2_output_round_2")
###Output
_____no_output_____
###Markdown
We could go on for a few more rounds, by repeating exactly the same steps as in round 2, but to keep things short, we will stop here:
###Code
caps2_output = caps2_output_round_2
###Output
_____no_output_____
###Markdown
Static or Dynamic Loop? In the code above, we created different operations in the TensorFlow graph for each round of the routing by agreement algorithm. In other words, it's a static loop.Sure, instead of copy/pasting the code several times, we could have written a `for` loop in Python, but this would not change the fact that the graph would end up containing different operations for each routing iteration. It's actually okay since we generally want less than 5 routing iterations, so the graph won't grow too big.However, you may prefer to implement the routing loop within the TensorFlow graph itself rather than using a Python `for` loop. To do this, you would need to use TensorFlow's `tf.while_loop()` function. This way, all routing iterations would reuse the same operations in the graph, it would be a dynamic loop.For example, here is how to build a small loop that computes the sum of squares from 1 to 100:
###Code
def condition(input, counter):
return tf.less(counter, 100)
def loop_body(input, counter):
output = tf.add(input, tf.square(counter))
return output, tf.add(counter, 1)
with tf.name_scope("compute_sum_of_squares"):
counter = tf.constant(1)
sum_of_squares = tf.constant(0)
result = tf.while_loop(condition, loop_body, [sum_of_squares, counter])
with tf.Session() as sess:
print(sess.run(result))
###Output
(328350, 100)
###Markdown
As you can see, the `tf.while_loop()` function expects the loop condition and body to be provided _via_ two functions. These functions will be called only once by TensorFlow, during the graph construction phase, _not_ while executing the graph. The `tf.while_loop()` function stitches together the graph fragments created by `condition()` and `loop_body()` with some additional operations to create the loop. Also note that during training, TensorFlow will automagically handle backpropagation through the loop, so you don't need to worry about that. Of course, we could have used this one-liner instead! ;-)
###Code
sum([i**2 for i in range(1, 100 + 1)])
###Output
_____no_output_____
###Markdown
Joke aside, apart from reducing the graph size, using a dynamic loop instead of a static loop can help reduce how much GPU RAM you use (if you are using a GPU). Indeed, if you set `swap_memory=True` when calling the `tf.while_loop()` function, TensorFlow will automatically check GPU RAM usage at each loop iteration, and it will take care of swapping memory between the GPU and the CPU when needed. Since CPU memory is much cheaper and more abundant than GPU RAM, this can really make a big difference. Estimated Class Probabilities (Length) The lengths of the output vectors represent the class probabilities, so we could just use `tf.norm()` to compute them, but as we saw when discussing the squash function, it would be risky, so instead let's create our own `safe_norm()` function:
###Code
def safe_norm(s, axis=-1, epsilon=1e-7, keep_dims=False, name=None):
with tf.name_scope(name, default_name="safe_norm"):
squared_norm = tf.reduce_sum(tf.square(s), axis=axis,
keep_dims=keep_dims)
return tf.sqrt(squared_norm + epsilon)
y_proba = safe_norm(caps2_output, axis=-2, name="y_proba")
###Output
_____no_output_____
###Markdown
To predict the class of each instance, we can just select the one with the highest estimated probability. To do this, let's start by finding its index using `tf.argmax()`:
###Code
y_proba_argmax = tf.argmax(y_proba, axis=2, name="y_proba")
###Output
_____no_output_____
###Markdown
Let's look at the shape of `y_proba_argmax`:
###Code
y_proba_argmax
###Output
_____no_output_____
###Markdown
That's what we wanted: for each instance, we now have the index of the longest output vector. Let's get rid of the last two dimensions by using `tf.squeeze()` which removes dimensions of size 1. This gives us the capsule network's predicted class for each instance:
###Code
y_pred = tf.squeeze(y_proba_argmax, axis=[1,2], name="y_pred")
y_pred
###Output
_____no_output_____
###Markdown
Okay, we are now ready to define the training operations, starting with the losses. Labels First, we will need a placeholder for the labels:
###Code
y = tf.placeholder(shape=[None], dtype=tf.int64, name="y")
###Output
_____no_output_____
###Markdown
Margin loss The paper uses a special margin loss to make it possible to detect two or more different digits in each image:$ L_k = T_k \max(0, m^{+} - \|\mathbf{v}_k\|)^2 + \lambda (1 - T_k) \max(0, \|\mathbf{v}_k\| - m^{-})^2$* $T_k$ is equal to 1 if the digit of class $k$ is present, or 0 otherwise.* In the paper, $m^{+} = 0.9$, $m^{-} = 0.1$ and $\lambda = 0.5$.* Note that there was an error in the video (at 15:47): the max operations are squared, not the norms. Sorry about that.
###Code
m_plus = 0.9
m_minus = 0.1
lambda_ = 0.5
###Output
_____no_output_____
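###Markdown
A small worked example may help (purely illustrative, using the constants above): suppose a single output capsule has norm $\|\mathbf{v}_k\| = 0.6$.
###Code
# Worked example: both margin loss terms for a capsule with norm 0.6.
v_norm = 0.6
present_term = max(0., m_plus - v_norm) ** 2            # (0.9 - 0.6)^2 = 0.09
absent_term = lambda_ * max(0., v_norm - m_minus) ** 2  # 0.5 * (0.6 - 0.1)^2 = 0.125
print(present_term, absent_term)
# If the digit is present (T_k = 1), the loss is 0.09, pushing the norm up toward 0.9;
# if it is absent (T_k = 0), the loss is 0.125, pushing the norm down toward 0.1.
###Output
_____no_output_____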
###Markdown
Since `y` will contain the digit classes, from 0 to 9, to get $T_k$ for every instance and every class, we can just use the `tf.one_hot()` function:
###Code
T = tf.one_hot(y, depth=caps2_n_caps, name="T")
###Output
_____no_output_____
###Markdown
A small example should make it clear what this does:
###Code
with tf.Session():
print(T.eval(feed_dict={y: np.array([0, 1, 2, 3, 9])}))
###Output
[[ 1. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 1. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 1. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 1.]]
###Markdown
Now let's compute the norm of the output vector for each output capsule and each instance. First, let's verify the shape of `caps2_output`:
###Code
caps2_output
###Output
_____no_output_____
###Markdown
The 16D output vectors are in the second to last dimension, so let's use the `safe_norm()` function with `axis=-2`:
###Code
caps2_output_norm = safe_norm(caps2_output, axis=-2, keep_dims=True,
name="caps2_output_norm")
###Output
_____no_output_____
###Markdown
Now let's compute $\max(0, m^{+} - \|\mathbf{v}_k\|)^2$, and reshape the result to get a simple matrix of shape (_batch size_, 10):
###Code
present_error_raw = tf.square(tf.maximum(0., m_plus - caps2_output_norm),
name="present_error_raw")
present_error = tf.reshape(present_error_raw, shape=(-1, 10),
name="present_error")
###Output
_____no_output_____
###Markdown
Next let's compute $\max(0, \|\mathbf{v}_k\| - m^{-})^2$ and reshape it:
###Code
absent_error_raw = tf.square(tf.maximum(0., caps2_output_norm - m_minus),
name="absent_error_raw")
absent_error = tf.reshape(absent_error_raw, shape=(-1, 10),
name="absent_error")
###Output
_____no_output_____
###Markdown
We are ready to compute the loss for each instance and each digit:
###Code
L = tf.add(T * present_error, lambda_ * (1.0 - T) * absent_error,
name="L")
###Output
_____no_output_____
###Markdown
Now we can sum the digit losses for each instance ($L_0 + L_1 + \cdots + L_9$), and compute the mean over all instances. This gives us the final margin loss:
###Code
margin_loss = tf.reduce_mean(tf.reduce_sum(L, axis=1), name="margin_loss")
###Output
_____no_output_____
###Markdown
Reconstruction Now let's add a decoder network on top of the capsule network. It is a regular 3-layer fully connected neural network which will learn to reconstruct the input images based on the output of the capsule network. This will force the capsule network to preserve all the information required to reconstruct the digits, across the whole network. This constraint regularizes the model: it reduces the risk of overfitting the training set, and it helps generalize to new digits. Mask The paper mentions that during training, instead of sending all the outputs of the capsule network to the decoder network, we must send only the output vector of the capsule that corresponds to the target digit. All the other output vectors must be masked out. At inference time, we must mask all output vectors except for the longest one, i.e., the one that corresponds to the predicted digit. You can see this in the paper's figure 2 (at 18:15 in the video): all output vectors are masked out, except for the reconstruction target's output vector. We need a placeholder to tell TensorFlow whether we want to mask the output vectors based on the labels (`True`) or on the predictions (`False`, the default):
###Code
mask_with_labels = tf.placeholder_with_default(False, shape=(),
name="mask_with_labels")
###Output
_____no_output_____
###Markdown
Now let's use `tf.cond()` to define the reconstruction targets as the labels `y` if `mask_with_labels` is `True`, or `y_pred` otherwise.
###Code
reconstruction_targets = tf.cond(mask_with_labels, # condition
lambda: y, # if True
lambda: y_pred, # if False
name="reconstruction_targets")
###Output
_____no_output_____
###Markdown
Note that the `tf.cond()` function expects the if-True and if-False tensors to be passed _via_ functions: these functions will be called just once during the graph construction phase (not during the execution phase), similar to `tf.while_loop()`. This allows TensorFlow to add the necessary operations to handle the conditional evaluation of the if-True or if-False tensors. However, in our case, the tensors `y` and `y_pred` are already created by the time we call `tf.cond()`, so unfortunately TensorFlow will consider both `y` and `y_pred` to be dependencies of the `reconstruction_targets` tensor. The `reconstruction_targets` tensor will end up with the correct value, but:1. whenever we evaluate a tensor that depends on `reconstruction_targets`, the `y_pred` tensor will be evaluated (even if `mask_with_labels` is `True`). This is not a big deal because computing `y_pred` adds no computing overhead during training, since we need it anyway to compute the margin loss. And during testing, if we are doing classification, we won't need reconstructions, so `reconstruction_targets` won't be evaluated at all.2. we will always need to feed a value for the `y` placeholder (even if `mask_with_labels` is `False`). This is a bit annoying, but we can pass an empty array, because TensorFlow won't use it anyway (it just does not know it yet when it checks for dependencies). Now that we have the reconstruction targets, let's create the reconstruction mask. It should be equal to 1.0 for the target class, and 0.0 for the other classes, for each instance. For this we can just use the `tf.one_hot()` function:
###Code
reconstruction_mask = tf.one_hot(reconstruction_targets,
depth=caps2_n_caps,
name="reconstruction_mask")
###Output
_____no_output_____
###Markdown
Let's check the shape of `reconstruction_mask`:
###Code
reconstruction_mask
###Output
_____no_output_____
###Markdown
Let's compare this to the shape of `caps2_output`:
###Code
caps2_output
###Output
_____no_output_____
###Markdown
Mmh, its shape is (_batch size_, 1, 10, 16, 1). We want to multiply it by the `reconstruction_mask`, but the shape of the `reconstruction_mask` is (_batch size_, 10). We must reshape it to (_batch size_, 1, 10, 1, 1) to make multiplication possible:
###Code
reconstruction_mask_reshaped = tf.reshape(
reconstruction_mask, [-1, 1, caps2_n_caps, 1, 1],
name="reconstruction_mask_reshaped")
###Output
_____no_output_____
###Markdown
At last! We can apply the mask:
###Code
caps2_output_masked = tf.multiply(
caps2_output, reconstruction_mask_reshaped,
name="caps2_output_masked")
caps2_output_masked
###Output
_____no_output_____
###Markdown
One last reshape operation to flatten the decoder's inputs:
###Code
decoder_input = tf.reshape(caps2_output_masked,
[-1, caps2_n_caps * caps2_n_dims],
name="decoder_input")
###Output
_____no_output_____
###Markdown
This gives us an array of shape (_batch size_, 160):
###Code
decoder_input
###Output
_____no_output_____
###Markdown
Decoder Now let's build the decoder. It's quite simple: two dense (fully connected) ReLU layers followed by a dense output sigmoid layer:
###Code
n_hidden1 = 512
n_hidden2 = 1024
n_output = 28 * 28
with tf.name_scope("decoder"):
hidden1 = tf.layers.dense(decoder_input, n_hidden1,
activation=tf.nn.relu,
name="hidden1")
hidden2 = tf.layers.dense(hidden1, n_hidden2,
activation=tf.nn.relu,
name="hidden2")
decoder_output = tf.layers.dense(hidden2, n_output,
activation=tf.nn.sigmoid,
name="decoder_output")
###Output
_____no_output_____
###Markdown
Reconstruction Loss Now let's compute the reconstruction loss. It is just the squared difference between the input image and the reconstructed image:
###Code
X_flat = tf.reshape(X, [-1, n_output], name="X_flat")
squared_difference = tf.square(X_flat - decoder_output,
name="squared_difference")
reconstruction_loss = tf.reduce_mean(squared_difference,
name="reconstruction_loss")
###Output
_____no_output_____
###Markdown
Final Loss The final loss is the sum of the margin loss and the reconstruction loss (scaled down by a factor of 0.0005 to ensure the margin loss dominates training):
###Code
alpha = 0.0005
loss = tf.add(margin_loss, alpha * reconstruction_loss, name="loss")
###Output
_____no_output_____
###Markdown
Final Touches Accuracy To measure our model's accuracy, we need to count the number of instances that are properly classified. For this, we can simply compare `y` and `y_pred`, convert the boolean value to a float32 (0.0 for False, 1.0 for True), and compute the mean over all the instances:
###Code
correct = tf.equal(y, y_pred, name="correct")
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32), name="accuracy")
###Output
_____no_output_____
###Markdown
Training Operations The paper mentions that the authors used the Adam optimizer with TensorFlow's default parameters:
###Code
optimizer = tf.train.AdamOptimizer()
training_op = optimizer.minimize(loss, name="training_op")
###Output
_____no_output_____
###Markdown
Init and Saver And let's add the usual variable initializer, as well as a `Saver`:
###Code
init = tf.global_variables_initializer()
saver = tf.train.Saver()
###Output
_____no_output_____
###Markdown
And... we're done with the construction phase! Please take a moment to celebrate. :) Training Training our capsule network is pretty standard. For simplicity, we won't do any fancy hyperparameter tuning, dropout or anything, we will just run the training operation over and over again, displaying the loss, and at the end of each epoch, measure the accuracy on the validation set, display it, and save the model if the validation loss is the lowest seen so far (this is a basic way to implement early stopping, without actually stopping). Hopefully the code should be self-explanatory, but here are a few details to note:* if a checkpoint file exists, it will be restored (this makes it possible to interrupt training, then restart it later from the last checkpoint),* we must not forget to feed `mask_with_labels=True` during training,* during testing, we let `mask_with_labels` default to `False` (but we still feed the labels since they are required to compute the accuracy),* the images loaded _via_ `mnist.train.next_batch()` are represented as `float32` arrays of shape \[784\], but the input placeholder `X` expects a `float32` array of shape \[28, 28, 1\], so we must reshape the images before we feed them to our model,* we evaluate the model's loss and accuracy on the full validation set (5,000 instances). To view progress and support systems that don't have a lot of RAM, the code evaluates the loss and accuracy on one batch at a time, and computes the mean loss and mean accuracy at the end. *Warning*: if you don't have a GPU, training will take a very long time (at least a few hours). With a GPU, it should take just a few minutes per epoch (e.g., 6 minutes on an NVidia GeForce GTX 1080Ti).
###Code
n_epochs = 10
batch_size = 50
restore_checkpoint = True
n_iterations_per_epoch = mnist.train.num_examples // batch_size
n_iterations_validation = mnist.validation.num_examples // batch_size
best_loss_val = np.infty
checkpoint_path = "./my_capsule_network"
with tf.Session() as sess:
if restore_checkpoint and tf.train.checkpoint_exists(checkpoint_path):
saver.restore(sess, checkpoint_path)
else:
init.run()
for epoch in range(n_epochs):
for iteration in range(1, n_iterations_per_epoch + 1):
X_batch, y_batch = mnist.train.next_batch(batch_size)
# Run the training operation and measure the loss:
_, loss_train = sess.run(
[training_op, loss],
feed_dict={X: X_batch.reshape([-1, 28, 28, 1]),
y: y_batch,
mask_with_labels: True})
print("\rIteration: {}/{} ({:.1f}%) Loss: {:.5f}".format(
iteration, n_iterations_per_epoch,
iteration * 100 / n_iterations_per_epoch,
loss_train),
end="")
# At the end of each epoch,
# measure the validation loss and accuracy:
loss_vals = []
acc_vals = []
for iteration in range(1, n_iterations_validation + 1):
X_batch, y_batch = mnist.validation.next_batch(batch_size)
loss_val, acc_val = sess.run(
[loss, accuracy],
feed_dict={X: X_batch.reshape([-1, 28, 28, 1]),
y: y_batch})
loss_vals.append(loss_val)
acc_vals.append(acc_val)
print("\rEvaluating the model: {}/{} ({:.1f}%)".format(
iteration, n_iterations_validation,
iteration * 100 / n_iterations_validation),
end=" " * 10)
loss_val = np.mean(loss_vals)
acc_val = np.mean(acc_vals)
print("\rEpoch: {} Val accuracy: {:.4f}% Loss: {:.6f}{}".format(
epoch + 1, acc_val * 100, loss_val,
" (improved)" if loss_val < best_loss_val else ""))
# And save the model if it improved:
if loss_val < best_loss_val:
save_path = saver.save(sess, checkpoint_path)
best_loss_val = loss_val
###Output
INFO:tensorflow:Restoring parameters from ./my_capsule_network
Epoch: 1 Val accuracy: 99.4400% Loss: 0.007998 (improved)
Epoch: 2 Val accuracy: 99.3400% Loss: 0.007959 (improved)
Epoch: 3 Val accuracy: 99.4000% Loss: 0.007436 (improved)
Epoch: 4 Val accuracy: 99.4000% Loss: 0.007568
Epoch: 5 Val accuracy: 99.2600% Loss: 0.007464
Epoch: 6 Val accuracy: 99.4800% Loss: 0.006631 (improved)
Epoch: 7 Val accuracy: 99.4000% Loss: 0.006915
Epoch: 8 Val accuracy: 99.4200% Loss: 0.006735
Epoch: 9 Val accuracy: 99.2200% Loss: 0.007709
Epoch: 10 Val accuracy: 99.4000% Loss: 0.007083
###Markdown
Training is finished: we reached over 99.4% accuracy on the validation set after just 5 epochs, so things are looking good. Now let's evaluate the model on the test set. Evaluation
###Code
n_iterations_test = mnist.test.num_examples // batch_size
with tf.Session() as sess:
saver.restore(sess, checkpoint_path)
loss_tests = []
acc_tests = []
for iteration in range(1, n_iterations_test + 1):
X_batch, y_batch = mnist.test.next_batch(batch_size)
loss_test, acc_test = sess.run(
[loss, accuracy],
feed_dict={X: X_batch.reshape([-1, 28, 28, 1]),
y: y_batch})
loss_tests.append(loss_test)
acc_tests.append(acc_test)
print("\rEvaluating the model: {}/{} ({:.1f}%)".format(
iteration, n_iterations_test,
iteration * 100 / n_iterations_test),
end=" " * 10)
loss_test = np.mean(loss_tests)
acc_test = np.mean(acc_tests)
print("\rFinal test accuracy: {:.4f}% Loss: {:.6f}".format(
acc_test * 100, loss_test))
###Output
INFO:tensorflow:Restoring parameters from ./my_capsule_network
Final test accuracy: 99.5300% Loss: 0.006631
###Markdown
We reach 99.53% accuracy on the test set. Pretty nice. :) Predictions Now let's make some predictions! We first fix a few images from the test set, then we start a session, restore the trained model, evaluate `caps2_output` to get the capsule network's output vectors, `decoder_output` to get the reconstructions, and `y_pred` to get the class predictions:
###Code
n_samples = 5
sample_images = mnist.test.images[:n_samples].reshape([-1, 28, 28, 1])
with tf.Session() as sess:
saver.restore(sess, checkpoint_path)
caps2_output_value, decoder_output_value, y_pred_value = sess.run(
[caps2_output, decoder_output, y_pred],
feed_dict={X: sample_images,
y: np.array([], dtype=np.int64)})
###Output
INFO:tensorflow:Restoring parameters from ./my_capsule_network
###Markdown
Note: we feed `y` with an empty array, but TensorFlow will not use it, as explained earlier. And now let's plot the images and their labels, followed by the corresponding reconstructions and predictions:
###Code
sample_images = sample_images.reshape(-1, 28, 28)
reconstructions = decoder_output_value.reshape([-1, 28, 28])
plt.figure(figsize=(n_samples * 2, 3))
for index in range(n_samples):
plt.subplot(1, n_samples, index + 1)
plt.imshow(sample_images[index], cmap="binary")
plt.title("Label:" + str(mnist.test.labels[index]))
plt.axis("off")
plt.show()
plt.figure(figsize=(n_samples * 2, 3))
for index in range(n_samples):
plt.subplot(1, n_samples, index + 1)
plt.title("Predicted:" + str(y_pred_value[index]))
plt.imshow(reconstructions[index], cmap="binary")
plt.axis("off")
plt.show()
###Output
_____no_output_____
###Markdown
The predictions are all correct, and the reconstructions look great. Hurray! Interpreting the Output Vectors Let's tweak the output vectors to see what their pose parameters represent. First, let's check the shape of the `caps2_output_value` NumPy array:
###Code
caps2_output_value.shape
###Output
_____no_output_____
###Markdown
Let's create a function that will tweak each of the 16 pose parameters (dimensions) in all output vectors. Each tweaked output vector will be identical to the original output vector, except that one of its pose parameters will be incremented by a value varying from -0.5 to 0.5. By default there will be 11 steps (-0.5, -0.4, ..., +0.4, +0.5). This function will return an array of shape (_tweaked pose parameters_=16, _steps_=11, _batch size_=5, 1, 10, 16, 1):
###Code
def tweak_pose_parameters(output_vectors, min=-0.5, max=0.5, n_steps=11):
    steps = np.linspace(min, max, n_steps) # -0.5, -0.4, ..., +0.5
pose_parameters = np.arange(caps2_n_dims) # 0, 1, ..., 15
tweaks = np.zeros([caps2_n_dims, n_steps, 1, 1, 1, caps2_n_dims, 1])
tweaks[pose_parameters, :, 0, 0, 0, pose_parameters, 0] = steps
output_vectors_expanded = output_vectors[np.newaxis, np.newaxis]
return tweaks + output_vectors_expanded
###Output
_____no_output_____
###Markdown
Let's compute all the tweaked output vectors and reshape the result to (_parameters_×_steps_×_instances_, 1, 10, 16, 1) so we can feed the array to the decoder:
###Code
n_steps = 11
tweaked_vectors = tweak_pose_parameters(caps2_output_value, n_steps=n_steps)
tweaked_vectors_reshaped = tweaked_vectors.reshape(
[-1, 1, caps2_n_caps, caps2_n_dims, 1])
###Output
_____no_output_____
###Markdown
Now let's feed these tweaked output vectors to the decoder and get the reconstructions it produces:
###Code
tweak_labels = np.tile(mnist.test.labels[:n_samples], caps2_n_dims * n_steps)
with tf.Session() as sess:
saver.restore(sess, checkpoint_path)
decoder_output_value = sess.run(
decoder_output,
feed_dict={caps2_output: tweaked_vectors_reshaped,
mask_with_labels: True,
y: tweak_labels})
###Output
INFO:tensorflow:Restoring parameters from ./my_capsule_network
###Markdown
Let's reshape the decoder's output so we can easily iterate on the output dimension, the tweak steps, and the instances:
###Code
tweak_reconstructions = decoder_output_value.reshape(
[caps2_n_dims, n_steps, n_samples, 28, 28])
###Output
_____no_output_____
###Markdown
Lastly, let's plot all the reconstructions, for the first 3 output dimensions, for each tweaking step (column) and each digit (row):
###Code
for dim in range(3):
print("Tweaking output dimension #{}".format(dim))
plt.figure(figsize=(n_steps / 1.2, n_samples / 1.5))
for row in range(n_samples):
for col in range(n_steps):
plt.subplot(n_samples, n_steps, row * n_steps + col + 1)
plt.imshow(tweak_reconstructions[dim, col, row], cmap="binary")
plt.axis("off")
plt.show()
###Output
Tweaking output dimension #0
###Markdown
Capsule Networks (CapsNets) 胶囊网络(CapsNets) Based on the paper: [Dynamic Routing Between Capsules](https://arxiv.org/abs/1710.09829), by Sara Sabour, Nicholas Frosst and Geoffrey E. Hinton (NIPS 2017).基于这篇论文:[Dynamic Routing Between Capsules](https://arxiv.org/abs/1710.09829),作者Sara Sabour, Nicholas Frosst 和 Geoffrey E. Hinton (NIPS 2017) Inspired in part from Huadong Liao's implementation: [CapsNet-TensorFlow](https://github.com/naturomics/CapsNet-Tensorflow).灵感来自Liao Huadong的implementation: [CapsNet-TensorFlow](https://github.com/naturomics/CapsNet-Tensorflow). Introduction 介绍 Watch [this video](https://youtu.be/pPN8d0E3900) to understand the key ideas behind Capsule Networks:观看[这个视频](https://youtu.be/pPN8d0E3900)来理解胶囊网络背后的关键理念:
###Code
from IPython.display import HTML
HTML("""<iframe width="560" height="315" src="https://www.youtube.com/embed/pPN8d0E3900" frameborder="0" allowfullscreen></iframe>""")
###Output
_____no_output_____
###Markdown
You may also want to watch [this video](https://youtu.be/2Kawrd5szHE), which presents the main difficulties in this notebook:
###Code
HTML("""<iframe width="560" height="315" src="https://www.youtube.com/embed/2Kawrd5szHE" frameborder="0" allowfullscreen></iframe>""")
###Output
_____no_output_____
###Markdown
Imports To support both Python 2 and Python 3:
###Code
from __future__ import division, print_function, unicode_literals
###Output
_____no_output_____
###Markdown
To plot pretty figures:
###Code
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
We will need NumPy and TensorFlow:
###Code
import numpy as np
import tensorflow as tf
###Output
/Users/ageron/.virtualenvs/ml/lib/python3.6/importlib/_bootstrap.py:219: RuntimeWarning: compiletime version 3.5 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.6
return f(*args, **kwds)
###Markdown
Reproducibility Let's reset the default graph, in case you re-run this notebook without restarting the kernel:
###Code
tf.reset_default_graph()
###Output
_____no_output_____
###Markdown
Let's set the random seeds so that this notebook always produces the same output:
###Code
np.random.seed(42)
tf.set_random_seed(42)
###Output
_____no_output_____
###Markdown
Load MNIST Yes, I know, it's MNIST again. But hopefully this powerful idea will work as well on larger datasets, time will tell.
###Code
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/")
###Output
Extracting /tmp/data/train-images-idx3-ubyte.gz
Extracting /tmp/data/train-labels-idx1-ubyte.gz
Extracting /tmp/data/t10k-images-idx3-ubyte.gz
Extracting /tmp/data/t10k-labels-idx1-ubyte.gz
###Markdown
Let's look at what these hand-written digit images look like:
###Code
n_samples = 5
plt.figure(figsize=(n_samples * 2, 3))
for index in range(n_samples):
plt.subplot(1, n_samples, index + 1)
sample_image = mnist.train.images[index].reshape(28, 28)
plt.imshow(sample_image, cmap="binary")
plt.axis("off")
plt.show()
###Output
_____no_output_____
###Markdown
And these are the corresponding labels:
###Code
mnist.train.labels[:n_samples]
###Output
_____no_output_____
###Markdown
Now let's build a Capsule Network to classify these images. Here's the overall architecture, enjoy the ASCII art! ;-)Note: for readability, I left out two arrows: Labels → Mask, and Input Images → Reconstruction Loss. ``` Loss ↑ ┌─────────┴─────────┐ Labels → Margin Loss Reconstruction Loss ↑ ↑ Length Decoder ↑ ↑ Digit Capsules ────Mask────┘ ↖↑↗ ↖↑↗ ↖↑↗ Primary Capsules ↑ Input Images``` We are going to build the graph starting from the bottom layer, and gradually move up, left side first. Let's go! Input Images Let's start by creating a placeholder for the input images (28×28 pixels, 1 color channel = grayscale).
###Code
X = tf.placeholder(shape=[None, 28, 28, 1], dtype=tf.float32, name="X")
###Output
_____no_output_____
###Markdown
Primary Capsules The first layer will be composed of 32 maps of 6×6 capsules each, where each capsule will output an 8D activation vector:
###Code
caps1_n_maps = 32
caps1_n_caps = caps1_n_maps * 6 * 6 # 1152 primary capsules
caps1_n_dims = 8
###Output
_____no_output_____
###Markdown
To compute their outputs, we first apply two regular convolutional layers:
###Code
conv1_params = {
"filters": 256,
"kernel_size": 9,
"strides": 1,
"padding": "valid",
"activation": tf.nn.relu,
}
conv2_params = {
"filters": caps1_n_maps * caps1_n_dims, # 256 convolutional filters
"kernel_size": 9,
"strides": 2,
"padding": "valid",
"activation": tf.nn.relu
}
conv1 = tf.layers.conv2d(X, name="conv1", **conv1_params)
conv2 = tf.layers.conv2d(conv1, name="conv2", **conv2_params)
###Output
_____no_output_____
###Markdown
Note: since we used a kernel size of 9 and no padding (for some reason, that's what `"valid"` means), the image shrunk by 9-1=8 pixels after each convolutional layer (28×28 to 20×20, then 20×20 to 12×12), and since we used a stride of 2 in the second convolutional layer, the image size was divided by 2. This is how we end up with 6×6 feature maps. Next, we reshape the output to get a bunch of 8D vectors representing the outputs of the primary capsules. The output of `conv2` is an array containing 32×8=256 feature maps for each instance, where each feature map is 6×6. So the shape of this output is (_batch size_, 6, 6, 256). We want to chop the 256 into 32 vectors of 8 dimensions each. We could do this by reshaping to (_batch size_, 6, 6, 32, 8). However, since this first capsule layer will be fully connected to the next capsule layer, we can simply flatten the 6×6 grids. This means we just need to reshape to (_batch size_, 6×6×32, 8).
###Code
caps1_raw = tf.reshape(conv2, [-1, caps1_n_caps, caps1_n_dims],
name="caps1_raw")
###Output
_____no_output_____
###Markdown
Now we need to squash these vectors. Let's define the `squash()` function, based on equation (1) from the paper:$\operatorname{squash}(\mathbf{s}) = \dfrac{\|\mathbf{s}\|^2}{1 + \|\mathbf{s}\|^2} \dfrac{\mathbf{s}}{\|\mathbf{s}\|}$The `squash()` function will squash all vectors in the given array, along the given axis (by default, the last axis).**Caution**, a nasty bug is waiting to bite you: the derivative of $\|\mathbf{s}\|$ is undefined when $\|\mathbf{s}\|=0$, so we can't just use `tf.norm()`, or else it will blow up during training: if a vector is zero, the gradients will be `nan`, so when the optimizer updates the variables, they will also become `nan`, and from then on you will be stuck in `nan` land. The solution is to implement the norm manually by computing the square root of the sum of squares plus a tiny epsilon value: $\|\mathbf{s}\| \approx \sqrt{\sum\limits_i{{s_i}^2}\,\,+ \epsilon}$.
###Code
def squash(s, axis=-1, epsilon=1e-7, name=None):
with tf.name_scope(name, default_name="squash"):
squared_norm = tf.reduce_sum(tf.square(s), axis=axis,
keep_dims=True)
safe_norm = tf.sqrt(squared_norm + epsilon)
squash_factor = squared_norm / (1. + squared_norm)
unit_vector = s / safe_norm
return squash_factor * unit_vector
###Output
_____no_output_____
###Markdown
Now let's apply this function to get the output $\mathbf{u}_i$ of each primary capsules $i$ :
###Code
caps1_output = squash(caps1_raw, name="caps1_output")
###Output
_____no_output_____
###Markdown
Great! We have the output of the first capsule layer. It wasn't too hard, was it? However, computing the next layer is where the fun really begins. Digit Capsules To compute the output of the digit capsules, we must first compute the predicted output vectors (one for each primary / digit capsule pair). Then we can run the routing by agreement algorithm. Compute the Predicted Output Vectors The digit capsule layer contains 10 capsules (one for each digit) of 16 dimensions each:
###Code
caps2_n_caps = 10
caps2_n_dims = 16
###Output
_____no_output_____
###Markdown
For each capsule $i$ in the first layer, we want to predict the output of every capsule $j$ in the second layer. For this, we will need a transformation matrix $\mathbf{W}_{i,j}$ (one for each pair of capsules ($i$, $j$)), then we can compute the predicted output $\hat{\mathbf{u}}_{j|i} = \mathbf{W}_{i,j} \, \mathbf{u}_i$ (equation (2)-right in the paper). Since we want to transform an 8D vector into a 16D vector, each transformation matrix $\mathbf{W}_{i,j}$ must have a shape of (16, 8). To compute $\hat{\mathbf{u}}_{j|i}$ for every pair of capsules ($i$, $j$), we will use a nice feature of the `tf.matmul()` function: you probably know that it lets you multiply two matrices, but you may not know that it also lets you multiply higher dimensional arrays. It treats the arrays as arrays of matrices, and it performs itemwise matrix multiplication. For example, suppose you have two 4D arrays, each containing a 2×3 grid of matrices. The first contains matrices $\mathbf{A}, \mathbf{B}, \mathbf{C}, \mathbf{D}, \mathbf{E}, \mathbf{F}$ and the second contains matrices $\mathbf{G}, \mathbf{H}, \mathbf{I}, \mathbf{J}, \mathbf{K}, \mathbf{L}$. If you multiply these two 4D arrays using the `tf.matmul()` function, this is what you get:$\pmatrix{\mathbf{A} & \mathbf{B} & \mathbf{C} \\\mathbf{D} & \mathbf{E} & \mathbf{F}} \times\pmatrix{\mathbf{G} & \mathbf{H} & \mathbf{I} \\\mathbf{J} & \mathbf{K} & \mathbf{L}} = \pmatrix{\mathbf{AG} & \mathbf{BH} & \mathbf{CI} \\\mathbf{DJ} & \mathbf{EK} & \mathbf{FL}}$ We can apply this function to compute $\hat{\mathbf{u}}_{j|i}$ for every pair of capsules ($i$, $j$) like this (recall that there are 6×6×32=1152 capsules in the first layer, and 10 in the second layer):$\pmatrix{ \mathbf{W}_{1,1} & \mathbf{W}_{1,2} & \cdots & \mathbf{W}_{1,10} \\ \mathbf{W}_{2,1} & \mathbf{W}_{2,2} & \cdots & \mathbf{W}_{2,10} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{W}_{1152,1} & \mathbf{W}_{1152,2} & \cdots & \mathbf{W}_{1152,10}} \times\pmatrix{ \mathbf{u}_1 & \mathbf{u}_1 & \cdots & \mathbf{u}_1 \\ \mathbf{u}_2 & \mathbf{u}_2 & \cdots & \mathbf{u}_2 \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{u}_{1152} & \mathbf{u}_{1152} & \cdots & \mathbf{u}_{1152}}=\pmatrix{\hat{\mathbf{u}}_{1|1} & \hat{\mathbf{u}}_{2|1} & \cdots & \hat{\mathbf{u}}_{10|1} \\\hat{\mathbf{u}}_{1|2} & \hat{\mathbf{u}}_{2|2} & \cdots & \hat{\mathbf{u}}_{10|2} \\\vdots & \vdots & \ddots & \vdots \\\hat{\mathbf{u}}_{1|1152} & \hat{\mathbf{u}}_{2|1152} & \cdots & \hat{\mathbf{u}}_{10|1152}}$ The shape of the first array is (1152, 10, 16, 8), and the shape of the second array is (1152, 10, 8, 1). Note that the second array must contain 10 identical copies of the vectors $\mathbf{u}_1$ to $\mathbf{u}_{1152}$. To create this array, we will use the handy `tf.tile()` function, which lets you create an array containing many copies of a base array, tiled in any way you want. Oh, wait a second! We forgot one dimension: _batch size_. Say we feed 50 images to the capsule network, it will make predictions for these 50 images simultaneously. So the shape of the first array must be (50, 1152, 10, 16, 8), and the shape of the second array must be (50, 1152, 10, 8, 1). The first layer capsules actually already output predictions for all 50 images, so the second array will be fine, but for the first array, we will need to use `tf.tile()` to have 50 copies of the transformation matrices. Okay, let's start by creating a trainable variable of shape (1, 1152, 10, 16, 8) that will hold all the transformation matrices. 
The first dimension of size 1 will make this array easy to tile. We initialize this variable randomly using a normal distribution with a standard deviation to 0.1.
###Code
init_sigma = 0.1
W_init = tf.random_normal(
shape=(1, caps1_n_caps, caps2_n_caps, caps2_n_dims, caps1_n_dims),
stddev=init_sigma, dtype=tf.float32, name="W_init")
W = tf.Variable(W_init, name="W")
###Output
_____no_output_____
###Markdown
Now we can create the first array by repeating `W` once per instance:
###Code
batch_size = tf.shape(X)[0]
W_tiled = tf.tile(W, [batch_size, 1, 1, 1, 1], name="W_tiled")
###Output
_____no_output_____
###Markdown
That's it! On to the second array, now. As discussed earlier, we need to create an array of shape (_batch size_, 1152, 10, 8, 1), containing the output of the first layer capsules, repeated 10 times (once per digit, along the third dimension, which is axis=2). The `caps1_output` array has a shape of (_batch size_, 1152, 8), so we first need to expand it twice, to get an array of shape (_batch size_, 1152, 1, 8, 1), then we can repeat it 10 times along the third dimension:
###Code
caps1_output_expanded = tf.expand_dims(caps1_output, -1,
name="caps1_output_expanded")
caps1_output_tile = tf.expand_dims(caps1_output_expanded, 2,
name="caps1_output_tile")
caps1_output_tiled = tf.tile(caps1_output_tile, [1, 1, caps2_n_caps, 1, 1],
name="caps1_output_tiled")
###Output
_____no_output_____
###Markdown
Let's check the shape of the first array:
###Code
W_tiled
###Output
_____no_output_____
###Markdown
Good, and now the second:
###Code
caps1_output_tiled
###Output
_____no_output_____
###Markdown
Yes! Now, to get all the predicted output vectors $\hat{\mathbf{u}}_{j|i}$, we just need to multiply these two arrays using `tf.matmul()`, as explained earlier:
###Code
caps2_predicted = tf.matmul(W_tiled, caps1_output_tiled,
name="caps2_predicted")
###Output
_____no_output_____
###Markdown
Let's check the shape:
###Code
caps2_predicted
###Output
_____no_output_____
###Markdown
Perfect, for each instance in the batch (we don't know the batch size yet, hence the "?") and for each pair of first and second layer capsules (1152×10) we have a 16D predicted output column vector (16×1). We're ready to apply the routing by agreement algorithm! Routing by agreement First let's initialize the raw routing weights $b_{i,j}$ to zero:
###Code
raw_weights = tf.zeros([batch_size, caps1_n_caps, caps2_n_caps, 1, 1],
dtype=np.float32, name="raw_weights")
###Output
_____no_output_____
###Markdown
We will see why we need the last two dimensions of size 1 in a minute. Round 1 First, let's apply the softmax function to compute the routing weights, $\mathbf{c}_{i} = \operatorname{softmax}(\mathbf{b}_i)$ (equation (3) in the paper):
###Code
routing_weights = tf.nn.softmax(raw_weights, dim=2, name="routing_weights")
###Output
_____no_output_____
###Markdown
Now let's compute the weighted sum of all the predicted output vectors for each second-layer capsule, $\mathbf{s}_j = \sum\limits_{i}{c_{i,j}\hat{\mathbf{u}}_{j|i}}$ (equation (2)-left in the paper):
###Code
weighted_predictions = tf.multiply(routing_weights, caps2_predicted,
name="weighted_predictions")
weighted_sum = tf.reduce_sum(weighted_predictions, axis=1, keep_dims=True,
name="weighted_sum")
###Output
_____no_output_____
###Markdown
There are a couple important details to note here:* To perform elementwise matrix multiplication (also called the Hadamard product, noted $\circ$), we use the `tf.multiply()` function. It requires `routing_weights` and `caps2_predicted` to have the same rank, which is why we added two extra dimensions of size 1 to `routing_weights`, earlier.* The shape of `routing_weights` is (_batch size_, 1152, 10, 1, 1) while the shape of `caps2_predicted` is (_batch size_, 1152, 10, 16, 1). Since they don't match on the fourth dimension (1 _vs_ 16), `tf.multiply()` automatically _broadcasts_ the `routing_weights` 16 times along that dimension. If you are not familiar with broadcasting, a simple example might help: $ \pmatrix{1 & 2 & 3 \\ 4 & 5 & 6} \circ \pmatrix{10 & 100 & 1000} = \pmatrix{1 & 2 & 3 \\ 4 & 5 & 6} \circ \pmatrix{10 & 100 & 1000 \\ 10 & 100 & 1000} = \pmatrix{10 & 200 & 3000 \\ 40 & 500 & 6000} $ And finally, let's apply the squash function to get the outputs of the second layer capsules at the end of the first iteration of the routing by agreement algorithm, $\mathbf{v}_j = \operatorname{squash}(\mathbf{s}_j)$ :
###Code
caps2_output_round_1 = squash(weighted_sum, axis=-2,
name="caps2_output_round_1")
caps2_output_round_1
###Output
_____no_output_____
###Markdown
Good! We have ten 16D output vectors for each instance, as expected. Round 2 First, let's measure how close each predicted vector $\hat{\mathbf{u}}_{j|i}$ is to the actual output vector $\mathbf{v}_j$ by computing their scalar product $\hat{\mathbf{u}}_{j|i} \cdot \mathbf{v}_j$. * Quick math reminder: if $\vec{a}$ and $\vec{b}$ are two vectors of equal length, and $\mathbf{a}$ and $\mathbf{b}$ are their corresponding column vectors (i.e., matrices with a single column), then $\mathbf{a}^T \mathbf{b}$ (i.e., the matrix multiplication of the transpose of $\mathbf{a}$, and $\mathbf{b}$) is a 1×1 matrix containing the scalar product of the two vectors $\vec{a}\cdot\vec{b}$. In Machine Learning, we generally represent vectors as column vectors, so when we talk about computing the scalar product $\hat{\mathbf{u}}_{j|i} \cdot \mathbf{v}_j$, this actually means computing ${\hat{\mathbf{u}}_{j|i}}^T \mathbf{v}_j$. Since we need to compute the scalar product $\hat{\mathbf{u}}_{j|i} \cdot \mathbf{v}_j$ for each instance, and for each pair of first and second level capsules $(i, j)$, we will once again take advantage of the fact that `tf.matmul()` can multiply many matrices simultaneously. This will require playing around with `tf.tile()` to get all dimensions to match (except for the last 2), just like we did earlier. So let's look at the shape of `caps2_predicted`, which holds all the predicted output vectors $\hat{\mathbf{u}}_{j|i}$ for each instance and each pair of capsules:
###Code
caps2_predicted
###Output
_____no_output_____
###Markdown
And now let's look at the shape of `caps2_output_round_1`, which holds 10 output vectors of 16D each, for each instance:
###Code
caps2_output_round_1
###Output
_____no_output_____
###Markdown
To get these shapes to match, we just need to tile the `caps2_output_round_1` array 1152 times (once per primary capsule) along the second dimension:
###Code
caps2_output_round_1_tiled = tf.tile(
caps2_output_round_1, [1, caps1_n_caps, 1, 1, 1],
name="caps2_output_round_1_tiled")
###Output
_____no_output_____
###Markdown
And now we are ready to call `tf.matmul()` (note that we must tell it to transpose the matrices in the first array, to get ${\hat{\mathbf{u}}_{j|i}}^T$ instead of $\hat{\mathbf{u}}_{j|i}$):
###Code
agreement = tf.matmul(caps2_predicted, caps2_output_round_1_tiled,
transpose_a=True, name="agreement")
###Output
_____no_output_____
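###Markdown
As a tiny standalone check of the column-vector convention from the math reminder above (made-up numbers, unrelated to the model): the scalar product of $\vec{a} = (2, 3)$ and $\vec{b} = (10, 100)$ is $2 \times 10 + 3 \times 100 = 320$:
###Code
a_col = tf.constant([[2.], [3.]])     # column vector, shape (2, 1)
b_col = tf.constant([[10.], [100.]])  # column vector, shape (2, 1)
with tf.Session() as sess:
    print(sess.run(tf.matmul(a_col, b_col, transpose_a=True)))  # [[320.]]
###Output
_____no_output_____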
###Markdown
We can now update the raw routing weights $b_{i,j}$ by simply adding the scalar product $\hat{\mathbf{u}}_{j|i} \cdot \mathbf{v}_j$ we just computed: $b_{i,j} \gets b_{i,j} + \hat{\mathbf{u}}_{j|i} \cdot \mathbf{v}_j$ (see Procedure 1, step 7, in the paper).
###Code
raw_weights_round_2 = tf.add(raw_weights, agreement,
name="raw_weights_round_2")
###Output
_____no_output_____
###Markdown
The rest of round 2 is the same as in round 1:
###Code
routing_weights_round_2 = tf.nn.softmax(raw_weights_round_2,
dim=2,
name="routing_weights_round_2")
weighted_predictions_round_2 = tf.multiply(routing_weights_round_2,
caps2_predicted,
name="weighted_predictions_round_2")
weighted_sum_round_2 = tf.reduce_sum(weighted_predictions_round_2,
axis=1, keep_dims=True,
name="weighted_sum_round_2")
caps2_output_round_2 = squash(weighted_sum_round_2,
axis=-2,
name="caps2_output_round_2")
###Output
_____no_output_____
###Markdown
We could go on for a few more rounds, by repeating exactly the same steps as in round 2, but to keep things short, we will stop here:
###Code
caps2_output = caps2_output_round_2
###Output
_____no_output_____
###Markdown
Static or Dynamic Loop? In the code above, we created different operations in the TensorFlow graph for each round of the routing by agreement algorithm. In other words, it's a static loop. Sure, instead of copy/pasting the code several times, we could have written a `for` loop in Python, but this would not change the fact that the graph would end up containing different operations for each routing iteration. That's actually okay, since we generally want fewer than 5 routing iterations, so the graph won't grow too big. However, you may prefer to implement the routing loop within the TensorFlow graph itself rather than using a Python `for` loop. To do this, you would need to use TensorFlow's `tf.while_loop()` function. This way, all routing iterations would reuse the same operations in the graph: it would be a dynamic loop. For example, here is how to build a small loop that computes the sum of squares from 1 to 99 (the loop stops once the counter reaches 100):
###Code
def condition(input, counter):
return tf.less(counter, 100)
def loop_body(input, counter):
output = tf.add(input, tf.square(counter))
return output, tf.add(counter, 1)
with tf.name_scope("compute_sum_of_squares"):
counter = tf.constant(1)
sum_of_squares = tf.constant(0)
result = tf.while_loop(condition, loop_body, [sum_of_squares, counter])
with tf.Session() as sess:
print(sess.run(result))
###Output
(328350, 100)
###Markdown
As you can see, the `tf.while_loop()` function expects the loop condition and body to be provided _via_ two functions. These functions will be called only once by TensorFlow, during the graph construction phase, _not_ while executing the graph. The `tf.while_loop()` function stitches together the graph fragments created by `condition()` and `loop_body()` with some additional operations to create the loop. Also note that during training, TensorFlow will automagically handle backpropagation through the loop, so you don't need to worry about that. The result above contains the final values of both loop variables: the sum $1^2 + 2^2 + \cdots + 99^2 = 328350$, and the counter, 100. Of course, we could have used this one-liner instead! ;-)
###Code
sum([i**2 for i in range(1, 100)])  # 328350, same as the loop above
###Output
_____no_output_____
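###Markdown
For completeness, here is a sketch of what the static loop above could look like with a Python `for` loop instead of copy/pasting. It assumes the tensors and the `squash()` function defined earlier; running it would simply add a second, unused copy of the routing operations to the graph, which illustrates the point: each iteration still creates new operations.
###Code
n_routing_rounds = 2  # hypothetical setting: the same two rounds as above
raw_w = tf.zeros([batch_size, caps1_n_caps, caps2_n_caps, 1, 1],
                 dtype=tf.float32)
for round_index in range(n_routing_rounds):
    routing_w = tf.nn.softmax(raw_w, dim=2)
    weighted = tf.multiply(routing_w, caps2_predicted)
    s = tf.reduce_sum(weighted, axis=1, keep_dims=True)
    v = squash(s, axis=-2)  # outputs of this routing round
    if round_index < n_routing_rounds - 1:
        # update the raw weights with the agreement, as in round 2 above
        v_tiled = tf.tile(v, [1, caps1_n_caps, 1, 1, 1])
        raw_w = tf.add(raw_w, tf.matmul(caps2_predicted, v_tiled,
                                        transpose_a=True))
###Output
_____no_output_____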
###Markdown
Joking aside, apart from reducing the graph size, using a dynamic loop instead of a static loop can help reduce how much GPU RAM you use (if you are using a GPU). Indeed, if you set `swap_memory=True` when calling the `tf.while_loop()` function, TensorFlow will automatically check GPU RAM usage at each loop iteration, and it will take care of swapping memory between the GPU and the CPU when needed. Since CPU memory is much cheaper and more abundant than GPU RAM, this can really make a big difference. Estimated Class Probabilities (Length) The lengths of the output vectors represent the class probabilities, so we could just use `tf.norm()` to compute them, but as we saw when discussing the squash function, it would be risky, so instead let's create our own `safe_norm()` function:
###Code
def safe_norm(s, axis=-1, epsilon=1e-7, keep_dims=False, name=None):
with tf.name_scope(name, default_name="safe_norm"):
squared_norm = tf.reduce_sum(tf.square(s), axis=axis,
keep_dims=keep_dims)
return tf.sqrt(squared_norm + epsilon)
y_proba = safe_norm(caps2_output, axis=-2, name="y_proba")
###Output
_____no_output_____
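###Markdown
A quick sanity check of `safe_norm()` on a toy vector (hypothetical values): the true norm of $(3, 4)$ is 5, and the epsilon only nudges the result by a negligible amount:
###Code
with tf.Session() as sess:
    print(sess.run(safe_norm(tf.constant([[3., 4.]]))))  # ~[5.]
###Output
_____no_output_____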
###Markdown
To predict the class of each instance, we can just select the one with the highest estimated probability. To do this, let's start by finding its index using `tf.argmax()`:
###Code
y_proba_argmax = tf.argmax(y_proba, axis=2, name="y_proba_argmax")
###Output
_____no_output_____
###Markdown
Let's look at the shape of `y_proba_argmax`:
###Code
y_proba_argmax
###Output
_____no_output_____
###Markdown
That's what we wanted: for each instance, we now have the index of the longest output vector. Let's get rid of the last two dimensions by using `tf.squeeze()` which removes dimensions of size 1. This gives us the capsule network's predicted class for each instance:
###Code
y_pred = tf.squeeze(y_proba_argmax, axis=[1,2], name="y_pred")
y_pred
###Output
_____no_output_____
###Markdown
Okay, we are now ready to define the training operations, starting with the losses. Labels First, we will need a placeholder for the labels:
###Code
y = tf.placeholder(shape=[None], dtype=tf.int64, name="y")
###Output
_____no_output_____
###Markdown
Margin loss The paper uses a special margin loss to make it possible to detect two or more different digits in each image: $L_k = T_k \max(0, m^{+} - \|\mathbf{v}_k\|)^2 + \lambda (1 - T_k) \max(0, \|\mathbf{v}_k\| - m^{-})^2$ * $T_k$ is equal to 1 if the digit of class $k$ is present, or 0 otherwise. * In the paper, $m^{+} = 0.9$, $m^{-} = 0.1$ and $\lambda = 0.5$. * Note that there was an error in the video (at 15:47): the max operations are squared, not the norms. Sorry about that.
###Code
m_plus = 0.9
m_minus = 0.1
lambda_ = 0.5
###Output
_____no_output_____
###Markdown
Since `y` will contain the digit classes, from 0 to 9, to get $T_k$ for every instance and every class, we can just use the `tf.one_hot()` function:
###Code
T = tf.one_hot(y, depth=caps2_n_caps, name="T")
###Output
_____no_output_____
###Markdown
A small example should make it clear what this does:
###Code
with tf.Session():
print(T.eval(feed_dict={y: np.array([0, 1, 2, 3, 9])}))
###Output
[[ 1. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 1. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 1. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 1.]]
###Markdown
Now let's compute the norm of the output vector for each output capsule and each instance. First, let's verify the shape of `caps2_output`:
###Code
caps2_output
###Output
_____no_output_____
###Markdown
The 16D output vectors are in the second to last dimension, so let's use the `safe_norm()` function with `axis=-2`:
###Code
caps2_output_norm = safe_norm(caps2_output, axis=-2, keep_dims=True,
name="caps2_output_norm")
###Output
_____no_output_____
###Markdown
Now let's compute $\max(0, m^{+} - \|\mathbf{v}_k\|)^2$, and reshape the result to get a simple matrix of shape (_batch size_, 10):
###Code
present_error_raw = tf.square(tf.maximum(0., m_plus - caps2_output_norm),
name="present_error_raw")
present_error = tf.reshape(present_error_raw, shape=(-1, 10),
name="present_error")
###Output
_____no_output_____
###Markdown
Next let's compute $\max(0, \|\mathbf{v}_k\| - m^{-})^2$ and reshape it:
###Code
absent_error_raw = tf.square(tf.maximum(0., caps2_output_norm - m_minus),
name="absent_error_raw")
absent_error = tf.reshape(absent_error_raw, shape=(-1, 10),
name="absent_error")
###Output
_____no_output_____
###Markdown
We are ready to compute the loss for each instance and each digit:
###Code
L = tf.add(T * present_error, lambda_ * (1.0 - T) * absent_error,
name="L")
###Output
_____no_output_____
###Markdown
Now we can sum the digit losses for each instance ($L_0 + L_1 + \cdots + L_9$), and compute the mean over all instances. This gives us the final margin loss:
###Code
margin_loss = tf.reduce_mean(tf.reduce_sum(L, axis=1), name="margin_loss")
###Output
_____no_output_____
###Markdown
Reconstruction Now let's add a decoder network on top of the capsule network. It is a regular 3-layer fully connected neural network which will learn to reconstruct the input images based on the output of the capsule network. This will force the capsule network to preserve all the information required to reconstruct the digits, across the whole network. This constraint regularizes the model: it reduces the risk of overfitting the training set, and it helps generalize to new digits. Mask The paper mentions that during training, instead of sending all the outputs of the capsule network to the decoder network, we must send only the output vector of the capsule that corresponds to the target digit. All the other output vectors must be masked out. At inference time, we must mask all output vectors except for the longest one, i.e., the one that corresponds to the predicted digit. You can see this in the paper's figure 2 (at 18:15 in the video): all output vectors are masked out, except for the reconstruction target's output vector. We need a placeholder to tell TensorFlow whether we want to mask the output vectors based on the labels (`True`) or on the predictions (`False`, the default):
###Code
mask_with_labels = tf.placeholder_with_default(False, shape=(),
name="mask_with_labels")
###Output
_____no_output_____
###Markdown
Now let's use `tf.cond()` to define the reconstruction targets as the labels `y` if `mask_with_labels` is `True`, or `y_pred` otherwise.
###Code
reconstruction_targets = tf.cond(mask_with_labels, # condition
lambda: y, # if True
lambda: y_pred, # if False
name="reconstruction_targets")
###Output
_____no_output_____
###Markdown
Note that the `tf.cond()` function expects the if-True and if-False tensors to be passed _via_ functions: these functions will be called just once during the graph construction phase (not during the execution phase), similar to `tf.while_loop()`. This allows TensorFlow to add the necessary operations to handle the conditional evaluation of the if-True or if-False tensors. However, in our case, the tensors `y` and `y_pred` are already created by the time we call `tf.cond()`, so unfortunately TensorFlow will consider both `y` and `y_pred` to be dependencies of the `reconstruction_targets` tensor. The `reconstruction_targets` tensor will end up with the correct value, but: 1. whenever we evaluate a tensor that depends on `reconstruction_targets`, the `y_pred` tensor will be evaluated (even if `mask_with_labels` is `True`). This is not a big deal because computing `y_pred` adds no computing overhead during training, since we need it anyway to compute the margin loss. And during testing, if we are doing classification, we won't need reconstructions, so `reconstruction_targets` won't be evaluated at all. 2. we will always need to feed a value for the `y` placeholder (even if `mask_with_labels` is `False`). This is a bit annoying, but we can pass an empty array, because TensorFlow won't use it anyway (it just does not know it yet when it checks for dependencies). Now that we have the reconstruction targets, let's create the reconstruction mask. It should be equal to 1.0 for the target class, and 0.0 for the other classes, for each instance. For this we can just use the `tf.one_hot()` function:
###Code
reconstruction_mask = tf.one_hot(reconstruction_targets,
depth=caps2_n_caps,
name="reconstruction_mask")
###Output
_____no_output_____
###Markdown
Let's check the shape of `reconstruction_mask`:
###Code
reconstruction_mask
###Output
_____no_output_____
###Markdown
Let's compare this to the shape of `caps2_output`:
###Code
caps2_output
###Output
_____no_output_____
###Markdown
Mmh, its shape is (_batch size_, 1, 10, 16, 1). We want to multiply it by the `reconstruction_mask`, but the shape of the `reconstruction_mask` is (_batch size_, 10). We must reshape it to (_batch size_, 1, 10, 1, 1) to make multiplication possible:
###Code
reconstruction_mask_reshaped = tf.reshape(
reconstruction_mask, [-1, 1, caps2_n_caps, 1, 1],
name="reconstruction_mask_reshaped")
###Output
_____no_output_____
###Markdown
At last! We can apply the mask:
###Code
caps2_output_masked = tf.multiply(
caps2_output, reconstruction_mask_reshaped,
name="caps2_output_masked")
caps2_output_masked
###Output
_____no_output_____
###Markdown
One last reshape operation to flatten the decoder's inputs:
###Code
decoder_input = tf.reshape(caps2_output_masked,
[-1, caps2_n_caps * caps2_n_dims],
name="decoder_input")
###Output
_____no_output_____
###Markdown
This gives us an array of shape (_batch size_, 160):
###Code
decoder_input
###Output
_____no_output_____
###Markdown
Decoder Now let's build the decoder. It's quite simple: two dense (fully connected) ReLU layers followed by a dense output sigmoid layer:
###Code
n_hidden1 = 512
n_hidden2 = 1024
n_output = 28 * 28
with tf.name_scope("decoder"):
hidden1 = tf.layers.dense(decoder_input, n_hidden1,
activation=tf.nn.relu,
name="hidden1")
hidden2 = tf.layers.dense(hidden1, n_hidden2,
activation=tf.nn.relu,
name="hidden2")
decoder_output = tf.layers.dense(hidden2, n_output,
activation=tf.nn.sigmoid,
name="decoder_output")
###Output
_____no_output_____
###Markdown
Reconstruction Loss Now let's compute the reconstruction loss. It is just the mean squared difference between the input image and the reconstructed image:
###Code
X_flat = tf.reshape(X, [-1, n_output], name="X_flat")
squared_difference = tf.square(X_flat - decoder_output,
name="squared_difference")
reconstruction_loss = tf.reduce_mean(squared_difference,
name="reconstruction_loss")
###Output
_____no_output_____
###Markdown
Final Loss The final loss is the sum of the margin loss and the reconstruction loss (scaled down by a factor of 0.0005 to ensure the margin loss dominates training):
###Code
alpha = 0.0005
loss = tf.add(margin_loss, alpha * reconstruction_loss, name="loss")
###Output
_____no_output_____
###Markdown
Final Touches Accuracy To measure our model's accuracy, we need to count the number of instances that are properly classified. For this, we can simply compare `y` and `y_pred`, convert the resulting boolean values to float32 (0.0 for False, 1.0 for True), and compute the mean over all the instances:
###Code
correct = tf.equal(y, y_pred, name="correct")
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32), name="accuracy")
###Output
_____no_output_____
###Markdown
Training Operations The paper mentions that the authors used the Adam optimizer with TensorFlow's default parameters:
###Code
optimizer = tf.train.AdamOptimizer()
training_op = optimizer.minimize(loss, name="training_op")
###Output
_____no_output_____
###Markdown
Init and Saver And let's add the usual variable initializer, as well as a `Saver`:
###Code
init = tf.global_variables_initializer()
saver = tf.train.Saver()
###Output
_____no_output_____
###Markdown
And... we're done with the construction phase! Please take a moment to celebrate. :) Training Training our capsule network is pretty standard. For simplicity, we won't do any fancy hyperparameter tuning, dropout or anything, we will just run the training operation over and over again, displaying the loss, and at the end of each epoch, measure the accuracy on the validation set, display it, and save the model if the validation loss is the lowest seen so far (this is a basic way to implement early stopping, without actually stopping). Hopefully the code should be self-explanatory, but here are a few details to note: * if a checkpoint file exists, it will be restored (this makes it possible to interrupt training, then restart it later from the last checkpoint), * we must not forget to feed `mask_with_labels=True` during training, * during testing, we let `mask_with_labels` default to `False` (but we still feed the labels since they are required to compute the accuracy), * the images loaded _via_ `mnist.train.next_batch()` are represented as `float32` arrays of shape \[784\], but the input placeholder `X` expects a `float32` array of shape \[28, 28, 1\], so we must reshape the images before we feed them to our model, * we evaluate the model's loss and accuracy on the full validation set (5,000 instances). To show progress and to support systems that don't have a lot of RAM, the code evaluates the loss and accuracy on one batch at a time, and computes the mean loss and mean accuracy at the end. *Warning*: if you don't have a GPU, training will take a very long time (at least a few hours). With a GPU, it should take just a few minutes per epoch (e.g., 6 minutes on an NVidia GeForce GTX 1080Ti).
###Code
n_epochs = 10
batch_size = 50
restore_checkpoint = True
n_iterations_per_epoch = mnist.train.num_examples // batch_size
n_iterations_validation = mnist.validation.num_examples // batch_size
best_loss_val = np.infty
checkpoint_path = "./my_capsule_network"
with tf.Session() as sess:
if restore_checkpoint and tf.train.checkpoint_exists(checkpoint_path):
saver.restore(sess, checkpoint_path)
else:
init.run()
for epoch in range(n_epochs):
for iteration in range(1, n_iterations_per_epoch + 1):
X_batch, y_batch = mnist.train.next_batch(batch_size)
# Run the training operation and measure the loss:
_, loss_train = sess.run(
[training_op, loss],
feed_dict={X: X_batch.reshape([-1, 28, 28, 1]),
y: y_batch,
mask_with_labels: True})
print("\rIteration: {}/{} ({:.1f}%) Loss: {:.5f}".format(
iteration, n_iterations_per_epoch,
iteration * 100 / n_iterations_per_epoch,
loss_train),
end="")
# At the end of each epoch,
# measure the validation loss and accuracy:
loss_vals = []
acc_vals = []
for iteration in range(1, n_iterations_validation + 1):
X_batch, y_batch = mnist.validation.next_batch(batch_size)
loss_val, acc_val = sess.run(
[loss, accuracy],
feed_dict={X: X_batch.reshape([-1, 28, 28, 1]),
y: y_batch})
loss_vals.append(loss_val)
acc_vals.append(acc_val)
print("\rEvaluating the model: {}/{} ({:.1f}%)".format(
iteration, n_iterations_validation,
iteration * 100 / n_iterations_validation),
end=" " * 10)
loss_val = np.mean(loss_vals)
acc_val = np.mean(acc_vals)
print("\rEpoch: {} Val accuracy: {:.4f}% Loss: {:.6f}{}".format(
epoch + 1, acc_val * 100, loss_val,
" (improved)" if loss_val < best_loss_val else ""))
# And save the model if it improved:
if loss_val < best_loss_val:
save_path = saver.save(sess, checkpoint_path)
best_loss_val = loss_val
###Output
INFO:tensorflow:Restoring parameters from ./my_capsule_network
Epoch: 1 Val accuracy: 99.4400% Loss: 0.007998 (improved)
Epoch: 2 Val accuracy: 99.3400% Loss: 0.007959 (improved)
Epoch: 3 Val accuracy: 99.4000% Loss: 0.007436 (improved)
Epoch: 4 Val accuracy: 99.4000% Loss: 0.007568
Epoch: 5 Val accuracy: 99.2600% Loss: 0.007464
Epoch: 6 Val accuracy: 99.4800% Loss: 0.006631 (improved)
Epoch: 7 Val accuracy: 99.4000% Loss: 0.006915
Epoch: 8 Val accuracy: 99.4200% Loss: 0.006735
Epoch: 9 Val accuracy: 99.2200% Loss: 0.007709
Epoch: 10 Val accuracy: 99.4000% Loss: 0.007083
###Markdown
Training is finished: we reached over 99.4% accuracy on the validation set, so things are looking good. Now let's evaluate the model on the test set. Evaluation
###Code
n_iterations_test = mnist.test.num_examples // batch_size
with tf.Session() as sess:
saver.restore(sess, checkpoint_path)
loss_tests = []
acc_tests = []
for iteration in range(1, n_iterations_test + 1):
X_batch, y_batch = mnist.test.next_batch(batch_size)
loss_test, acc_test = sess.run(
[loss, accuracy],
feed_dict={X: X_batch.reshape([-1, 28, 28, 1]),
y: y_batch})
loss_tests.append(loss_test)
acc_tests.append(acc_test)
print("\rEvaluating the model: {}/{} ({:.1f}%)".format(
iteration, n_iterations_test,
iteration * 100 / n_iterations_test),
end=" " * 10)
loss_test = np.mean(loss_tests)
acc_test = np.mean(acc_tests)
print("\rFinal test accuracy: {:.4f}% Loss: {:.6f}".format(
acc_test * 100, loss_test))
###Output
INFO:tensorflow:Restoring parameters from ./my_capsule_network
Final test accuracy: 99.5300% Loss: 0.006631
###Markdown
We reach 99.53% accuracy on the test set. Pretty nice. :) Predictions Now let's make some predictions! We first pick a few images from the test set, then we start a session, restore the trained model, evaluate `caps2_output` to get the capsule network's output vectors, `decoder_output` to get the reconstructions, and `y_pred` to get the class predictions:
###Code
n_samples = 5
sample_images = mnist.test.images[:n_samples].reshape([-1, 28, 28, 1])
with tf.Session() as sess:
saver.restore(sess, checkpoint_path)
caps2_output_value, decoder_output_value, y_pred_value = sess.run(
[caps2_output, decoder_output, y_pred],
feed_dict={X: sample_images,
y: np.array([], dtype=np.int64)})
###Output
INFO:tensorflow:Restoring parameters from ./my_capsule_network
###Markdown
Note: we feed `y` with an empty array, but TensorFlow will not use it, as explained earlier. And now let's plot the images and their labels, followed by the corresponding reconstructions and predictions:
###Code
sample_images = sample_images.reshape(-1, 28, 28)
reconstructions = decoder_output_value.reshape([-1, 28, 28])
plt.figure(figsize=(n_samples * 2, 3))
for index in range(n_samples):
plt.subplot(1, n_samples, index + 1)
plt.imshow(sample_images[index], cmap="binary")
plt.title("Label:" + str(mnist.test.labels[index]))
plt.axis("off")
plt.show()
plt.figure(figsize=(n_samples * 2, 3))
for index in range(n_samples):
plt.subplot(1, n_samples, index + 1)
plt.title("Predicted:" + str(y_pred_value[index]))
plt.imshow(reconstructions[index], cmap="binary")
plt.axis("off")
plt.show()
###Output
_____no_output_____
###Markdown
The predictions are all correct, and the reconstructions look great. Hurray! Interpreting the Output Vectors Let's tweak the output vectors to see what their pose parameters represent. First, let's check the shape of the `caps2_output_value` NumPy array:
###Code
caps2_output_value.shape
###Output
_____no_output_____
###Markdown
Let's create a function that will tweak each of the 16 pose parameters (dimensions) in all output vectors. Each tweaked output vector will be identical to the original output vector, except that one of its pose parameters will be incremented by a value varying from -0.5 to 0.5. By default there will be 11 steps (-0.5, -0.4, ..., +0.4, +0.5). This function will return an array of shape (_tweaked pose parameters_=16, _steps_=11, _batch size_=5, 1, 10, 16, 1):
###Code
def tweak_pose_parameters(output_vectors, min=-0.5, max=0.5, n_steps=11):
    steps = np.linspace(min, max, n_steps) # -0.5, -0.4, ..., +0.5
pose_parameters = np.arange(caps2_n_dims) # 0, 1, ..., 15
tweaks = np.zeros([caps2_n_dims, n_steps, 1, 1, 1, caps2_n_dims, 1])
tweaks[pose_parameters, :, 0, 0, 0, pose_parameters, 0] = steps
output_vectors_expanded = output_vectors[np.newaxis, np.newaxis]
return tweaks + output_vectors_expanded
###Output
_____no_output_____
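###Markdown
A quick sanity check of the shape described above, using the `caps2_output_value` array computed earlier:
###Code
tweak_pose_parameters(caps2_output_value).shape  # (16, 11, 5, 1, 10, 16, 1)
###Output
_____no_output_____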
###Markdown
Let's compute all the tweaked output vectors and reshape the result to (_parameters_×_steps_×_instances_, 1, 10, 16, 1) so we can feed the array to the decoder:
###Code
n_steps = 11
tweaked_vectors = tweak_pose_parameters(caps2_output_value, n_steps=n_steps)
tweaked_vectors_reshaped = tweaked_vectors.reshape(
[-1, 1, caps2_n_caps, caps2_n_dims, 1])
###Output
_____no_output_____
###Markdown
Now let's feed these tweaked output vectors to the decoder and get the reconstructions it produces:
###Code
tweak_labels = np.tile(mnist.test.labels[:n_samples], caps2_n_dims * n_steps)
with tf.Session() as sess:
saver.restore(sess, checkpoint_path)
decoder_output_value = sess.run(
decoder_output,
feed_dict={caps2_output: tweaked_vectors_reshaped,
mask_with_labels: True,
y: tweak_labels})
###Output
INFO:tensorflow:Restoring parameters from ./my_capsule_network
###Markdown
Let's reshape the decoder's output so we can easily iterate on the output dimension, the tweak steps, and the instances:
###Code
tweak_reconstructions = decoder_output_value.reshape(
[caps2_n_dims, n_steps, n_samples, 28, 28])
###Output
_____no_output_____
###Markdown
Lastly, let's plot all the reconstructions, for the first 3 output dimensions, for each tweaking step (column) and each digit (row):
###Code
for dim in range(3):
print("Tweaking output dimension #{}".format(dim))
plt.figure(figsize=(n_steps / 1.2, n_samples / 1.5))
for row in range(n_samples):
for col in range(n_steps):
plt.subplot(n_samples, n_steps, row * n_steps + col + 1)
plt.imshow(tweak_reconstructions[dim, col, row], cmap="binary")
plt.axis("off")
plt.show()
###Output
Tweaking output dimension #0
We will see why we need the last two dimensions of size 1 in a minute. Round 1 First, let's apply the softmax function to compute the routing weights, $\mathbf{c}_{i} = \operatorname{softmax}(\mathbf{b}_i)$ (equation (3) in the paper):
###Code
routing_weights = tf.nn.softmax(raw_weights, dim=2, name="routing_weights")
###Output
_____no_output_____
###Markdown
Now let's compute the weighted sum of all the predicted output vectors for each second-layer capsule, $\mathbf{s}_j = \sum\limits_{i}{c_{i,j}\hat{\mathbf{u}}_{j|i}}$ (equation (2)-left in the paper):
###Code
weighted_predictions = tf.multiply(routing_weights, caps2_predicted,
name="weighted_predictions")
weighted_sum = tf.reduce_sum(weighted_predictions, axis=1, keep_dims=True,
name="weighted_sum")
###Output
_____no_output_____
###Markdown
There are a couple important details to note here:* To perform elementwise matrix multiplication (also called the Hadamard product, noted $\circ$), we use the `tf.multiply()` function. It requires `routing_weights` and `caps2_predicted` to have the same rank, which is why we added two extra dimensions of size 1 to `routing_weights`, earlier.* The shape of `routing_weights` is (_batch size_, 1152, 10, 1, 1) while the shape of `caps2_predicted` is (_batch size_, 1152, 10, 16, 1). Since they don't match on the fourth dimension (1 _vs_ 16), `tf.multiply()` automatically _broadcasts_ the `routing_weights` 16 times along that dimension. If you are not familiar with broadcasting, a simple example might help: $ \pmatrix{1 & 2 & 3 \\ 4 & 5 & 6} \circ \pmatrix{10 & 100 & 1000} = \pmatrix{1 & 2 & 3 \\ 4 & 5 & 6} \circ \pmatrix{10 & 100 & 1000 \\ 10 & 100 & 1000} = \pmatrix{10 & 200 & 3000 \\ 40 & 500 & 6000} $ And finally, let's apply the squash function to get the outputs of the second layer capsules at the end of the first iteration of the routing by agreement algorithm, $\mathbf{v}_j = \operatorname{squash}(\mathbf{s}_j)$ :
###Code
caps2_output_round_1 = squash(weighted_sum, axis=-2,
name="caps2_output_round_1")
caps2_output_round_1
###Output
_____no_output_____
###Markdown
Good! We have ten 16D output vectors for each instance, as expected. Round 2 First, let's measure how close each predicted vector $\hat{\mathbf{u}}_{j|i}$ is to the actual output vector $\mathbf{v}_j$ by computing their scalar product $\hat{\mathbf{u}}_{j|i} \cdot \mathbf{v}_j$. * Quick math reminder: if $\vec{a}$ and $\vec{b}$ are two vectors of equal length, and $\mathbf{a}$ and $\mathbf{b}$ are their corresponding column vectors (i.e., matrices with a single column), then $\mathbf{a}^T \mathbf{b}$ (i.e., the matrix multiplication of the transpose of $\mathbf{a}$, and $\mathbf{b}$) is a 1×1 matrix containing the scalar product of the two vectors $\vec{a}\cdot\vec{b}$. In Machine Learning, we generally represent vectors as column vectors, so when we talk about computing the scalar product $\hat{\mathbf{u}}_{j|i} \cdot \mathbf{v}_j$, this actually means computing ${\hat{\mathbf{u}}_{j|i}}^T \mathbf{v}_j$. Since we need to compute the scalar product $\hat{\mathbf{u}}_{j|i} \cdot \mathbf{v}_j$ for each instance, and for each pair of first and second level capsules $(i, j)$, we will once again take advantage of the fact that `tf.matmul()` can multiply many matrices simultaneously. This will require playing around with `tf.tile()` to get all dimensions to match (except for the last 2), just like we did earlier. So let's look at the shape of `caps2_predicted`, which holds all the predicted output vectors $\hat{\mathbf{u}}_{j|i}$ for each instance and each pair of capsules:
###Code
caps2_predicted
###Output
_____no_output_____
###Markdown
And now let's look at the shape of `caps2_output_round_1`, which holds 10 outputs vectors of 16D each, for each instance:
###Code
caps2_output_round_1
###Output
_____no_output_____
###Markdown
To get these shapes to match, we just need to tile the `caps2_output_round_1` array 1152 times (once per primary capsule) along the second dimension:
###Code
caps2_output_round_1_tiled = tf.tile(
caps2_output_round_1, [1, caps1_n_caps, 1, 1, 1],
name="caps2_output_round_1_tiled")
###Output
_____no_output_____
###Markdown
And now we are ready to call `tf.matmul()` (note that we must tell it to transpose the matrices in the first array, to get ${\hat{\mathbf{u}}_{j|i}}^T$ instead of $\hat{\mathbf{u}}_{j|i}$):
###Code
agreement = tf.matmul(caps2_predicted, caps2_output_round_1_tiled,
transpose_a=True, name="agreement")
###Output
_____no_output_____
###Markdown
We can now update the raw routing weights $b_{i,j}$ by simply adding the scalar product $\hat{\mathbf{u}}_{j|i} \cdot \mathbf{v}_j$ we just computed: $b_{i,j} \gets b_{i,j} + \hat{\mathbf{u}}_{j|i} \cdot \mathbf{v}_j$ (see Procedure 1, step 7, in the paper).
###Code
raw_weights_round_2 = tf.add(raw_weights, agreement,
name="raw_weights_round_2")
###Output
_____no_output_____
###Markdown
The rest of round 2 is the same as in round 1:
###Code
routing_weights_round_2 = tf.nn.softmax(raw_weights_round_2,
dim=2,
name="routing_weights_round_2")
weighted_predictions_round_2 = tf.multiply(routing_weights_round_2,
caps2_predicted,
name="weighted_predictions_round_2")
weighted_sum_round_2 = tf.reduce_sum(weighted_predictions_round_2,
axis=1, keep_dims=True,
name="weighted_sum_round_2")
caps2_output_round_2 = squash(weighted_sum_round_2,
axis=-2,
name="caps2_output_round_2")
###Output
_____no_output_____
###Markdown
We could go on for a few more rounds, by repeating exactly the same steps as in round 2, but to keep things short, we will stop here:
###Code
caps2_output = caps2_output_round_2
###Output
_____no_output_____
###Markdown
Static or Dynamic Loop? In the code above, we created different operations in the TensorFlow graph for each round of the routing by agreement algorithm. In other words, it's a static loop.Sure, instead of copy/pasting the code several times, we could have written a `for` loop in Python, but this would not change the fact that the graph would end up containing different operations for each routing iteration. It's actually okay since we generally want less than 5 routing iterations, so the graph won't grow too big.However, you may prefer to implement the routing loop within the TensorFlow graph itself rather than using a Python `for` loop. To do this, you would need to use TensorFlow's `tf.while_loop()` function. This way, all routing iterations would reuse the same operations in the graph, it would be a dynamic loop.For example, here is how to build a small loop that computes the sum of squares from 1 to 100:
###Code
def condition(input, counter):
return tf.less(counter, 100)
def loop_body(input, counter):
output = tf.add(input, tf.square(counter))
return output, tf.add(counter, 1)
with tf.name_scope("compute_sum_of_squares"):
counter = tf.constant(1)
sum_of_squares = tf.constant(0)
result = tf.while_loop(condition, loop_body, [sum_of_squares, counter])
with tf.Session() as sess:
print(sess.run(result))
###Output
(328350, 100)
###Markdown
As you can see, the `tf.while_loop()` function expects the loop condition and body to be provided _via_ two functions. These functions will be called only once by TensorFlow, during the graph construction phase, _not_ while executing the graph. The `tf.while_loop()` function stitches together the graph fragments created by `condition()` and `loop_body()` with some additional operations to create the loop.Also note that during training, TensorFlow will automagically handle backpropogation through the loop, so you don't need to worry about that. Of course, we could have used this one-liner instead! ;-)
###Code
sum([i**2 for i in range(1, 100 + 1)])
###Output
_____no_output_____
###Markdown
Joke aside, apart from reducing the graph size, using a dynamic loop instead of a static loop can help reduce how much GPU RAM you use (if you are using a GPU). Indeed, if you set `swap_memory=True` when calling the `tf.while_loop()` function, TensorFlow will automatically check GPU RAM usage at each loop iteration, and it will take care of swapping memory between the GPU and the CPU when needed. Since CPU memory is much cheaper and abundant than GPU RAM, this can really make a big difference. Estimated Class Probabilities (Length) The lengths of the output vectors represent the class probabilities, so we could just use `tf.norm()` to compute them, but as we saw when discussing the squash function, it would be risky, so instead let's create our own `safe_norm()` function:
###Code
def safe_norm(s, axis=-1, epsilon=1e-7, keep_dims=False, name=None):
with tf.name_scope(name, default_name="safe_norm"):
squared_norm = tf.reduce_sum(tf.square(s), axis=axis,
keep_dims=keep_dims)
return tf.sqrt(squared_norm + epsilon)
y_proba = safe_norm(caps2_output, axis=-2, name="y_proba")
###Output
_____no_output_____
###Markdown
To predict the class of each instance, we can just select the one with the highest estimated probability. To do this, let's start by finding its index using `tf.argmax()`:
###Code
y_proba_argmax = tf.argmax(y_proba, axis=2, name="y_proba")
###Output
_____no_output_____
###Markdown
Let's look at the shape of `y_proba_argmax`:
###Code
y_proba_argmax
###Output
_____no_output_____
###Markdown
That's what we wanted: for each instance, we now have the index of the longest output vector. Let's get rid of the last two dimensions by using `tf.squeeze()` which removes dimensions of size 1. This gives us the capsule network's predicted class for each instance:
###Code
y_pred = tf.squeeze(y_proba_argmax, axis=[1,2], name="y_pred")
y_pred
###Output
_____no_output_____
###Markdown
Okay, we are now ready to define the training operations, starting with the losses. Labels First, we will need a placeholder for the labels:
###Code
y = tf.placeholder(shape=[None], dtype=tf.int64, name="y")
###Output
_____no_output_____
###Markdown
Margin loss The paper uses a special margin loss to make it possible to detect two or more different digits in each image:$ L_k = T_k \max(0, m^{+} - \|\mathbf{v}_k\|)^2 + \lambda (1 - T_k) \max(0, \|\mathbf{v}_k\| - m^{-})^2$* $T_k$ is equal to 1 if the digit of class $k$ is present, or 0 otherwise.* In the paper, $m^{+} = 0.9$, $m^{-} = 0.1$ and $\lambda = 0.5$.* Note that there was an error in the video (at 15:47): the max operations are squared, not the norms. Sorry about that.
###Code
m_plus = 0.9
m_minus = 0.1
lambda_ = 0.5
###Output
_____no_output_____
###Markdown
Since `y` will contain the digit classes, from 0 to 9, to get $T_k$ for every instance and every class, we can just use the `tf.one_hot()` function:
###Code
T = tf.one_hot(y, depth=caps2_n_caps, name="T")
###Output
_____no_output_____
###Markdown
A small example should make it clear what this does:
###Code
with tf.Session():
print(T.eval(feed_dict={y: np.array([0, 1, 2, 3, 9])}))
###Output
[[ 1. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 1. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 1. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 1.]]
###Markdown
Now let's compute the norm of the output vector for each output capsule and each instance. First, let's verify the shape of `caps2_output`:
###Code
caps2_output
###Output
_____no_output_____
###Markdown
The 16D output vectors are in the second to last dimension, so let's use the `safe_norm()` function with `axis=-2`:
###Code
caps2_output_norm = safe_norm(caps2_output, axis=-2, keep_dims=True,
name="caps2_output_norm")
###Output
_____no_output_____
###Markdown
Now let's compute $\max(0, m^{+} - \|\mathbf{v}_k\|)^2$, and reshape the result to get a simple matrix of shape (_batch size_, 10):
###Code
present_error_raw = tf.square(tf.maximum(0., m_plus - caps2_output_norm),
name="present_error_raw")
present_error = tf.reshape(present_error_raw, shape=(-1, 10),
name="present_error")
###Output
_____no_output_____
###Markdown
Next let's compute $\max(0, \|\mathbf{v}_k\| - m^{-})^2$ and reshape it:
###Code
absent_error_raw = tf.square(tf.maximum(0., caps2_output_norm - m_minus),
name="absent_error_raw")
absent_error = tf.reshape(absent_error_raw, shape=(-1, 10),
name="absent_error")
###Output
_____no_output_____
###Markdown
We are ready to compute the loss for each instance and each digit:
###Code
L = tf.add(T * present_error, lambda_ * (1.0 - T) * absent_error,
name="L")
###Output
_____no_output_____
###Markdown
Now we can sum the digit losses for each instance ($L_0 + L_1 + \cdots + L_9$), and compute the mean over all instances. This gives us the final margin loss:
###Code
margin_loss = tf.reduce_mean(tf.reduce_sum(L, axis=1), name="margin_loss")
###Output
_____no_output_____
###Markdown
Reconstruction Now let's add a decoder network on top of the capsule network. It is a regular 3-layer fully connected neural network which will learn to reconstruct the input images based on the output of the capsule network. This will force the capsule network to preserve all the information required to reconstruct the digits, across the whole network. This constraint regularizes the model: it reduces the risk of overfitting the training set, and it helps generalize to new digits. Mask The paper mentions that during training, instead of sending all the outputs of the capsule network to the decoder network, we must send only the output vector of the capsule that corresponds to the target digit. All the other output vectors must be masked out. At inference time, we must mask all output vectors except for the longest one, i.e., the one that corresponds to the predicted digit. You can see this in the paper's figure 2 (at 18:15 in the video): all output vectors are masked out, except for the reconstruction target's output vector. We need a placeholder to tell TensorFlow whether we want to mask the output vectors based on the labels (`True`) or on the predictions (`False`, the default):
###Code
mask_with_labels = tf.placeholder_with_default(False, shape=(),
name="mask_with_labels")
###Output
_____no_output_____
###Markdown
Now let's use `tf.cond()` to define the reconstruction targets as the labels `y` if `mask_with_labels` is `True`, or `y_pred` otherwise.
###Code
reconstruction_targets = tf.cond(mask_with_labels, # condition
lambda: y, # if True
lambda: y_pred, # if False
name="reconstruction_targets")
###Output
_____no_output_____
###Markdown
Note that the `tf.cond()` function expects the if-True and if-False tensors to be passed _via_ functions: these functions will be called just once during the graph construction phase (not during the execution phase), similar to `tf.while_loop()`. This allows TensorFlow to add the necessary operations to handle the conditional evaluation of the if-True or if-False tensors. However, in our case, the tensors `y` and `y_pred` are already created by the time we call `tf.cond()`, so unfortunately TensorFlow will consider both `y` and `y_pred` to be dependencies of the `reconstruction_targets` tensor. The `reconstruction_targets` tensor will end up with the correct value, but:1. whenever we evaluate a tensor that depends on `reconstruction_targets`, the `y_pred` tensor will be evaluated (even if `mask_with_layers` is `True`). This is not a big deal because computing `y_pred` adds no computing overhead during training, since we need it anyway to compute the margin loss. And during testing, if we are doing classification, we won't need reconstructions, so `reconstruction_targets` won't be evaluated at all.2. we will always need to feed a value for the `y` placeholder (even if `mask_with_layers` is `False`). This is a bit annoying, but we can pass an empty array, because TensorFlow won't use it anyway (it just does not know it yet when it checks for dependencies). Now that we have the reconstruction targets, let's create the reconstruction mask. It should be equal to 1.0 for the target class, and 0.0 for the other classes, for each instance. For this we can just use the `tf.one_hot()` function:
###Code
reconstruction_mask = tf.one_hot(reconstruction_targets,
depth=caps2_n_caps,
name="reconstruction_mask")
###Output
_____no_output_____
###Markdown
Let's check the shape of `reconstruction_mask`:
###Code
reconstruction_mask
###Output
_____no_output_____
###Markdown
Let's compare this to the shape of `caps2_output`:
###Code
caps2_output
###Output
_____no_output_____
###Markdown
Mmh, its shape is (_batch size_, 1, 10, 16, 1). We want to multiply it by the `reconstruction_mask`, but the shape of the `reconstruction_mask` is (_batch size_, 10). We must reshape it to (_batch size_, 1, 10, 1, 1) to make multiplication possible:
###Code
reconstruction_mask_reshaped = tf.reshape(
reconstruction_mask, [-1, 1, caps2_n_caps, 1, 1],
name="reconstruction_mask_reshaped")
###Output
_____no_output_____
###Markdown
At last! We can apply the mask:
###Code
caps2_output_masked = tf.multiply(
caps2_output, reconstruction_mask_reshaped,
name="caps2_output_masked")
caps2_output_masked
###Output
_____no_output_____
###Markdown
One last reshape operation to flatten the decoder's inputs:
###Code
decoder_input = tf.reshape(caps2_output_masked,
[-1, caps2_n_caps * caps2_n_dims],
name="decoder_input")
###Output
_____no_output_____
###Markdown
This gives us an array of shape (_batch size_, 160):
###Code
decoder_input
###Output
_____no_output_____
###Markdown
Decoder Now let's build the decoder. It's quite simple: two dense (fully connected) ReLU layers followed by a dense output sigmoid layer:
###Code
n_hidden1 = 512
n_hidden2 = 1024
n_output = 28 * 28
with tf.name_scope("decoder"):
hidden1 = tf.layers.dense(decoder_input, n_hidden1,
activation=tf.nn.relu,
name="hidden1")
hidden2 = tf.layers.dense(hidden1, n_hidden2,
activation=tf.nn.relu,
name="hidden2")
decoder_output = tf.layers.dense(hidden2, n_output,
activation=tf.nn.sigmoid,
name="decoder_output")
###Output
_____no_output_____
###Markdown
Reconstruction Loss Now let's compute the reconstruction loss. It is just the squared difference between the input image and the reconstructed image:
###Code
X_flat = tf.reshape(X, [-1, n_output], name="X_flat")
squared_difference = tf.square(X_flat - decoder_output,
name="squared_difference")
reconstruction_loss = tf.reduce_sum(squared_difference,
name="reconstruction_loss")
###Output
_____no_output_____
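###Markdown
Note that `tf.reduce_sum()` sums over both the pixels and the batch dimension, so this loss grows with the batch size. Here is the same reduction sketched in NumPy (toy arrays with made-up values, just to make the computation explicit):
###Code
# NumPy sketch of the reconstruction loss for a toy batch of 2 images.
X_toy = np.random.rand(2, n_output).astype(np.float32)
recon_toy = np.random.rand(2, n_output).astype(np.float32)
loss_toy = np.sum(np.square(X_toy - recon_toy))  # one scalar for the whole batch
print(loss_toy)
###Output
_____no_output_____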
###Markdown
Final Loss The final loss is the sum of the margin loss and the reconstruction loss (scaled down by a factor of 0.0005 to ensure the margin loss dominates training):
###Code
alpha = 0.0005
loss = tf.add(margin_loss, alpha * reconstruction_loss, name="loss")
###Output
_____no_output_____
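###Markdown
A back-of-the-envelope check on this scaling (the 0.01 average squared pixel error below is an assumed, illustrative number, not a measurement): with batches of 50 images of 784 pixels, the scaled reconstruction term comes out comparable to the margin loss without dominating it.
###Code
# Assumed mean squared pixel error of 0.01, batch of 50 images of 784 pixels:
print(alpha * (0.01 * 784 * 50))  # ~0.2
###Output
_____no_output_____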
###Markdown
Final Touches Accuracy To measure our model's accuracy, we need to count the number of instances that are properly classified. For this, we can simply compare `y` and `y_pred`, convert the resulting boolean values to `float32` (0.0 for False, 1.0 for True), and compute the mean over all the instances:
###Code
correct = tf.equal(y, y_pred, name="correct")
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32), name="accuracy")
###Output
_____no_output_____
###Markdown
Training Operations The paper mentions that the authors used the Adam optimizer with TensorFlow's default parameters:
###Code
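# TensorFlow's defaults for AdamOptimizer: learning_rate=0.001,
# beta1=0.9, beta2=0.999, epsilon=1e-08.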
optimizer = tf.train.AdamOptimizer()
training_op = optimizer.minimize(loss, name="training_op")
###Output
_____no_output_____
###Markdown
Init and Saver And let's add the usual variable initializer, as well as a `Saver`:
###Code
init = tf.global_variables_initializer()
saver = tf.train.Saver()
###Output
_____no_output_____
###Markdown
And... we're done with the construction phase! Please take a moment to celebrate. :) Training Training our capsule network is pretty standard. For simplicity, we won't do any fancy hyperparameter tuning or dropout; we will just run the training operation over and over again, displaying the loss, and at the end of each epoch, measure the accuracy on the validation set, display it, and save the model if the validation loss is the lowest seen so far (this is a basic way to implement early stopping, without actually stopping). Hopefully the code should be self-explanatory, but here are a few details to note:
* if a checkpoint file exists, it will be restored (this makes it possible to interrupt training, then restart it later from the last checkpoint),
* we must not forget to feed `mask_with_labels=True` during training,
* during testing, we let `mask_with_labels` default to `False` (but we still feed the labels since they are required to compute the accuracy),
* the images loaded _via_ `mnist.train.next_batch()` are represented as `float32` arrays of shape \[784\], but the input placeholder `X` expects a `float32` array of shape \[28, 28, 1\], so we must reshape the images before we feed them to our model,
* we evaluate the model's loss and accuracy on the full validation set (5,000 instances). To view progress and support systems that don't have a lot of RAM, the code evaluates the loss and accuracy on one batch at a time, and computes the mean loss and mean accuracy at the end.

*Warning*: if you don't have a GPU, training will take a very long time (at least a few hours). With a GPU, it should take just a few minutes per epoch (e.g., 6 minutes on an NVidia GeForce GTX 1080Ti).
###Code
n_epochs = 10
batch_size = 50
restore_checkpoint = True
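# With MNIST's 55,000 training and 5,000 validation images, this gives
# 1,100 training iterations and 100 validation iterations per epoch.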
n_iterations_per_epoch = mnist.train.num_examples // batch_size
n_iterations_validation = mnist.validation.num_examples // batch_size
best_loss_val = np.infty
checkpoint_path = "./my_capsule_network"
with tf.Session() as sess:
if restore_checkpoint and tf.train.checkpoint_exists(checkpoint_path):
saver.restore(sess, checkpoint_path)
else:
init.run()
for epoch in range(n_epochs):
for iteration in range(1, n_iterations_per_epoch + 1):
X_batch, y_batch = mnist.train.next_batch(batch_size)
# Run the training operation and measure the loss:
_, loss_train = sess.run(
[training_op, loss],
feed_dict={X: X_batch.reshape([-1, 28, 28, 1]),
y: y_batch,
mask_with_labels: True})
print("\rIteration: {}/{} ({:.1f}%) Loss: {:.5f}".format(
iteration, n_iterations_per_epoch,
iteration * 100 / n_iterations_per_epoch,
loss_train),
end="")
# At the end of each epoch,
# measure the validation loss and accuracy:
loss_vals = []
acc_vals = []
for iteration in range(1, n_iterations_validation + 1):
X_batch, y_batch = mnist.validation.next_batch(batch_size)
loss_val, acc_val = sess.run(
[loss, accuracy],
feed_dict={X: X_batch.reshape([-1, 28, 28, 1]),
y: y_batch})
loss_vals.append(loss_val)
acc_vals.append(acc_val)
print("\rEvaluating the model: {}/{} ({:.1f}%)".format(
iteration, n_iterations_validation,
iteration * 100 / n_iterations_validation),
end=" " * 10)
loss_val = np.mean(loss_vals)
acc_val = np.mean(acc_vals)
print("\rEpoch: {} Val accuracy: {:.4f}% Loss: {:.6f}{}".format(
epoch + 1, acc_val * 100, loss_val,
" (improved)" if loss_val < best_loss_val else ""))
# And save the model if it improved:
if loss_val < best_loss_val:
save_path = saver.save(sess, checkpoint_path)
best_loss_val = loss_val
###Output
Epoch: 1 Val accuracy: 98.7000% Loss: 0.416563 (improved)
Epoch: 2 Val accuracy: 99.0400% Loss: 0.291740 (improved)
Epoch: 3 Val accuracy: 99.1200% Loss: 0.241666 (improved)
Epoch: 4 Val accuracy: 99.2800% Loss: 0.211442 (improved)
Epoch: 5 Val accuracy: 99.3200% Loss: 0.196026 (improved)
Epoch: 6 Val accuracy: 99.3600% Loss: 0.186166 (improved)
Epoch: 7 Val accuracy: 99.3400% Loss: 0.179290 (improved)
Epoch: 8 Val accuracy: 99.3800% Loss: 0.173593 (improved)
Epoch: 9 Val accuracy: 99.3600% Loss: 0.169071 (improved)
Epoch: 10 Val accuracy: 99.3400% Loss: 0.165477 (improved)
###Markdown
Training is finished; we reached over 99.3% accuracy on the validation set after just 5 epochs, so things are looking good. Now let's evaluate the model on the test set. Evaluation
###Code
n_iterations_test = mnist.test.num_examples // batch_size
with tf.Session() as sess:
saver.restore(sess, checkpoint_path)
loss_tests = []
acc_tests = []
for iteration in range(1, n_iterations_test + 1):
X_batch, y_batch = mnist.test.next_batch(batch_size)
loss_test, acc_test = sess.run(
[loss, accuracy],
feed_dict={X: X_batch.reshape([-1, 28, 28, 1]),
y: y_batch})
loss_tests.append(loss_test)
acc_tests.append(acc_test)
print("\rEvaluating the model: {}/{} ({:.1f}%)".format(
iteration, n_iterations_test,
iteration * 100 / n_iterations_test),
end=" " * 10)
loss_test = np.mean(loss_tests)
acc_test = np.mean(acc_tests)
print("\rFinal test accuracy: {:.4f}% Loss: {:.6f}".format(
acc_test * 100, loss_test))
###Output
INFO:tensorflow:Restoring parameters from ./my_capsule_network
Final test accuracy: 99.4300% Loss: 0.165047
###Markdown
We reach 99.43% accuracy on the test set. Pretty nice. :) Predictions Now let's make some predictions! We first grab a few images from the test set, then we start a session, restore the trained model, evaluate `caps2_output` to get the capsule network's output vectors, `decoder_output` to get the reconstructions, and `y_pred` to get the class predictions:
###Code
n_samples = 5
sample_images = mnist.test.images[:n_samples].reshape([-1, 28, 28, 1])
with tf.Session() as sess:
saver.restore(sess, checkpoint_path)
caps2_output_value, decoder_output_value, y_pred_value = sess.run(
[caps2_output, decoder_output, y_pred],
feed_dict={X: sample_images,
y: np.array([], dtype=np.int64)})
###Output
INFO:tensorflow:Restoring parameters from ./my_capsule_network
###Markdown
Note: we feed `y` with an empty array, but TensorFlow will not use it, as explained earlier. And now let's plot the images and their labels, followed by the corresponding reconstructions and predictions:
###Code
sample_images = sample_images.reshape(-1, 28, 28)
reconstructions = decoder_output_value.reshape([-1, 28, 28])
plt.figure(figsize=(n_samples * 2, 3))
for index in range(n_samples):
plt.subplot(1, n_samples, index + 1)
plt.imshow(sample_images[index], cmap="binary")
plt.title("Label:" + str(mnist.test.labels[index]))
plt.axis("off")
plt.show()
plt.figure(figsize=(n_samples * 2, 3))
for index in range(n_samples):
plt.subplot(1, n_samples, index + 1)
plt.title("Predicted:" + str(y_pred_value[index]))
plt.imshow(reconstructions[index], cmap="binary")
plt.axis("off")
plt.show()
###Output
_____no_output_____
###Markdown
The predictions are all correct, and the reconstructions look great. Hurray! Interpreting the Output Vectors Let's tweak the output vectors to see what their pose parameters represent. First, let's check the shape of the `caps2_output_value` NumPy array:
###Code
caps2_output_value.shape
###Output
_____no_output_____
###Markdown
Let's create a function that will tweak each of the 16 pose parameters (dimensions) in all output vectors. Each tweaked output vector will be identical to the original output vector, except that one of its pose parameters will be incremented by a value varying from -0.5 to 0.5. By default there will be 11 steps (-0.5, -0.4, ..., +0.4, +0.5). This function will return an array of shape (_tweaked pose parameters_=16, _steps_=11, _batch size_=5, 1, 10, 16, 1):
###Code
def tweak_pose_parameters(output_vectors, min=-0.5, max=0.5, n_steps=11):
    steps = np.linspace(min, max, n_steps) # -0.5, -0.4, ..., +0.5 by default
pose_parameters = np.arange(caps2_n_dims) # 0, 1, ..., 15
tweaks = np.zeros([caps2_n_dims, n_steps, 1, 1, 1, caps2_n_dims, 1])
tweaks[pose_parameters, :, 0, 0, 0, pose_parameters, 0] = steps
output_vectors_expanded = output_vectors[np.newaxis, np.newaxis]
return tweaks + output_vectors_expanded
###Output
_____no_output_____
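###Markdown
A quick shape check with a dummy array (`dummy_vectors` is a throwaway zeros array; the real values come from the session run above) confirms the shape announced earlier:
###Code
# The tweaks array (16, 11, 1, 1, 1, 16, 1) broadcasts against the
# expanded vectors (1, 1, 5, 1, 10, 16, 1).
dummy_vectors = np.zeros([n_samples, 1, caps2_n_caps, caps2_n_dims, 1])
print(tweak_pose_parameters(dummy_vectors).shape)  # (16, 11, 5, 1, 10, 16, 1)
###Output
_____no_output_____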
###Markdown
Let's compute all the tweaked output vectors and reshape the result to (_parameters_×_steps_×_instances_, 1, 10, 16, 1) so we can feed the array to the decoder:
###Code
n_steps = 11
tweaked_vectors = tweak_pose_parameters(caps2_output_value, n_steps=n_steps)
tweaked_vectors_reshaped = tweaked_vectors.reshape(
[-1, 1, caps2_n_caps, caps2_n_dims, 1])
###Output
_____no_output_____
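###Markdown
A quick sanity check: 16 parameters × 11 steps × 5 instances = 880 rows, each with the usual (1, 10, 16, 1) structure:
###Code
tweaked_vectors_reshaped.shape  # expected: (880, 1, 10, 16, 1)
###Output
_____no_output_____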
###Markdown
Now let's feed these tweaked output vectors to the decoder and get the reconstructions it produces:
###Code
tweak_labels = np.tile(mnist.test.labels[:n_samples], caps2_n_dims * n_steps)
with tf.Session() as sess:
saver.restore(sess, checkpoint_path)
decoder_output_value = sess.run(
decoder_output,
feed_dict={caps2_output: tweaked_vectors_reshaped,
mask_with_labels: True,
y: tweak_labels})
###Output
INFO:tensorflow:Restoring parameters from ./my_capsule_network
###Markdown
Let's reshape the decoder's output so we can easily iterate on the output dimension, the tweak steps, and the instances:
###Code
tweak_reconstructions = decoder_output_value.reshape(
[caps2_n_dims, n_steps, n_samples, 28, 28])
###Output
_____no_output_____
###Markdown
Lastly, let's plot all the reconstructions, for the first 3 output dimensions, for each tweaking step (column) and each digit (row):
###Code
for dim in range(3):
print("Tweaking output dimension #{}".format(dim))
plt.figure(figsize=(n_steps / 1.2, n_samples / 1.5))
for row in range(n_samples):
for col in range(n_steps):
plt.subplot(n_samples, n_steps, row * n_steps + col + 1)
plt.imshow(tweak_reconstructions[dim, col, row], cmap="binary")
plt.axis("off")
plt.show()
###Output
Tweaking output dimension #0
###Markdown
Capsule Networks (CapsNets) Based on the paper: [Dynamic Routing Between Capsules](https://arxiv.org/abs/1710.09829), by Sara Sabour, Nicholas Frosst and Geoffrey E. Hinton (NIPS 2017). Inspired in part from Huadong Liao's implementation: [CapsNet-TensorFlow](https://github.com/naturomics/CapsNet-Tensorflow). Introduction Watch [this video](https://youtu.be/pPN8d0E3900) to understand the key ideas behind Capsule Networks:
###Code
from IPython.display import HTML
HTML("""<iframe width="560" height="315" src="https://www.youtube.com/embed/pPN8d0E3900" frameborder="0" allowfullscreen></iframe>""")
###Output
_____no_output_____
###Markdown
You may also want to watch [this video](https://youtu.be/2Kawrd5szHE), which presents the main difficulties in this notebook:
###Code
HTML("""<iframe width="560" height="315" src="https://www.youtube.com/embed/2Kawrd5szHE" frameborder="0" allowfullscreen></iframe>""")
###Output
_____no_output_____
###Markdown
Imports To support both Python 2 and Python 3:
###Code
from __future__ import division, print_function, unicode_literals
###Output
_____no_output_____
###Markdown
To plot pretty figures:
###Code
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
We will need NumPy and TensorFlow:
###Code
import numpy as np
import tensorflow as tf
###Output
/Users/ageron/.virtualenvs/ml/lib/python3.6/importlib/_bootstrap.py:219: RuntimeWarning: compiletime version 3.5 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.6
return f(*args, **kwds)
###Markdown
Reproducibility Let's reset the default graph, in case you re-run this notebook without restarting the kernel:
###Code
tf.reset_default_graph()
###Output
_____no_output_____
###Markdown
Let's set the random seeds so that this notebook always produces the same output:
###Code
np.random.seed(42)
tf.set_random_seed(42)
###Output
_____no_output_____
###Markdown
Load MNIST Yes, I know, it's MNIST again. But hopefully this powerful idea will work as well on larger datasets, time will tell.
###Code
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/")
###Output
Extracting /tmp/data/train-images-idx3-ubyte.gz
Extracting /tmp/data/train-labels-idx1-ubyte.gz
Extracting /tmp/data/t10k-images-idx3-ubyte.gz
Extracting /tmp/data/t10k-labels-idx1-ubyte.gz
###Markdown
Let's look at what these hand-written digit images look like:
###Code
n_samples = 5
plt.figure(figsize=(n_samples * 2, 3))
for index in range(n_samples):
plt.subplot(1, n_samples, index + 1)
sample_image = mnist.train.images[index].reshape(28, 28)
plt.imshow(sample_image, cmap="binary")
plt.axis("off")
plt.show()
###Output
_____no_output_____
###Markdown
And these are the corresponding labels:
###Code
mnist.train.labels[:n_samples]
###Output
_____no_output_____
###Markdown
Now let's build a Capsule Network to classify these images. Here's the overall architecture, enjoy the ASCII art! ;-)Note: for readability, I left out two arrows: Labels → Mask, and Input Images → Reconstruction Loss. ``` Loss ↑ ┌─────────┴─────────┐ Labels → Margin Loss Reconstruction Loss ↑ ↑ Length Decoder ↑ ↑ Digit Capsules ────Mask────┘ ↖↑↗ ↖↑↗ ↖↑↗ Primary Capsules ↑ Input Images``` We are going to build the graph starting from the bottom layer, and gradually move up, left side first. Let's go! Input Images Let's start by creating a placeholder for the input images (28×28 pixels, 1 color channel = grayscale).
###Code
X = tf.placeholder(shape=[None, 28, 28, 1], dtype=tf.float32, name="X")
###Output
_____no_output_____
###Markdown
Primary Capsules The first layer will be composed of 32 maps of 6×6 capsules each, where each capsule will output an 8D activation vector:
###Code
caps1_n_maps = 32
caps1_n_caps = caps1_n_maps * 6 * 6 # 1152 primary capsules
caps1_n_dims = 8
###Output
_____no_output_____
###Markdown
To compute their outputs, we first apply two regular convolutional layers:
###Code
conv1_params = {
"filters": 256,
"kernel_size": 9,
"strides": 1,
"padding": "valid",
"activation": tf.nn.relu,
}
conv2_params = {
"filters": caps1_n_maps * caps1_n_dims, # 256 convolutional filters
"kernel_size": 9,
"strides": 2,
"padding": "valid",
"activation": tf.nn.relu
}
conv1 = tf.layers.conv2d(X, name="conv1", **conv1_params)
conv2 = tf.layers.conv2d(conv1, name="conv2", **conv2_params)
###Output
_____no_output_____
###Markdown
Note: since we used a kernel size of 9 and no padding (for some reason, that's what `"valid"` means), the image shrunk by 9-1=8 pixels after each convolutional layer (28×28 to 20×20, then 20×20 to 12×12), and since we used a stride of 2 in the second convolutional layer, the image size was divided by 2. This is how we end up with 6×6 feature maps. Next, we reshape the output to get a bunch of 8D vectors representing the outputs of the primary capsules. The output of `conv2` is an array containing 32×8=256 feature maps for each instance, where each feature map is 6×6. So the shape of this output is (_batch size_, 6, 6, 256). We want to chop the 256 into 32 vectors of 8 dimensions each. We could do this by reshaping to (_batch size_, 6, 6, 32, 8). However, since this first capsule layer will be fully connected to the next capsule layer, we can simply flatten the 6×6 grids. This means we just need to reshape to (_batch size_, 6×6×32, 8).
###Code
caps1_raw = tf.reshape(conv2, [-1, caps1_n_caps, caps1_n_dims],
name="caps1_raw")
###Output
_____no_output_____
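###Markdown
As a quick sanity check (my addition), the output size of a "valid" convolution is $\lfloor (n_{in} - k) / s \rfloor + 1$, where $k$ is the kernel size and $s$ the stride, which indeed gives the 6×6 maps announced above:
###Code
def conv_output_size(n_in, kernel_size, stride):
    # "valid" padding: no zero-padding around the input
    return (n_in - kernel_size) // stride + 1

size_after_conv1 = conv_output_size(28, 9, 1) # 20
size_after_conv2 = conv_output_size(size_after_conv1, 9, 2) # 6
print(size_after_conv1, size_after_conv2)
###Output
_____no_output_____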
###Markdown
Now we need to squash these vectors. Let's define the `squash()` function, based on equation (1) from the paper:$\operatorname{squash}(\mathbf{s}) = \dfrac{\|\mathbf{s}\|^2}{1 + \|\mathbf{s}\|^2} \dfrac{\mathbf{s}}{\|\mathbf{s}\|}$The `squash()` function will squash all vectors in the given array, along the given axis (by default, the last axis).**Caution**, a nasty bug is waiting to bite you: the derivative of $\|\mathbf{s}\|$ is undefined when $\|\mathbf{s}\|=0$, so we can't just use `tf.norm()`, or else it will blow up during training: if a vector is zero, the gradients will be `nan`, so when the optimizer updates the variables, they will also become `nan`, and from then on you will be stuck in `nan` land. The solution is to implement the norm manually by computing the square root of the sum of squares plus a tiny epsilon value: $\|\mathbf{s}\| \approx \sqrt{\sum\limits_i{{s_i}^2}\,\,+ \epsilon}$.
###Code
def squash(s, axis=-1, epsilon=1e-7, name=None):
with tf.name_scope(name, default_name="squash"):
squared_norm = tf.reduce_sum(tf.square(s), axis=axis,
keep_dims=True)
safe_norm = tf.sqrt(squared_norm + epsilon)
squash_factor = squared_norm / (1. + squared_norm)
unit_vector = s / safe_norm
return squash_factor * unit_vector
###Output
_____no_output_____
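###Markdown
To see the bug in action, here is a quick sketch (my addition, with throwaway names): the gradient of the exact norm at the zero vector comes out as `nan`, while the epsilon-padded norm has a finite gradient:
###Code
zero_vector = tf.zeros([3])
grad_exact = tf.gradients(tf.norm(zero_vector), [zero_vector])[0]
grad_safe = tf.gradients(
    tf.sqrt(tf.reduce_sum(tf.square(zero_vector)) + 1e-7),
    [zero_vector])[0]
with tf.Session() as sess:
    # expected: [nan nan nan] for the exact norm, [0. 0. 0.] for the safe one
    print(sess.run([grad_exact, grad_safe]))
###Output
_____no_output_____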
###Markdown
Now let's apply this function to get the output $\mathbf{u}_i$ of each primary capsule $i$:
###Code
caps1_output = squash(caps1_raw, name="caps1_output")
###Output
_____no_output_____
###Markdown
Great! We have the output of the first capsule layer. It wasn't too hard, was it? However, computing the next layer is where the fun really begins. Digit Capsules To compute the output of the digit capsules, we must first compute the predicted output vectors (one for each primary / digit capsule pair). Then we can run the routing by agreement algorithm. Compute the Predicted Output Vectors The digit capsule layer contains 10 capsules (one for each digit) of 16 dimensions each:
###Code
caps2_n_caps = 10
caps2_n_dims = 16
###Output
_____no_output_____
###Markdown
For each capsule $i$ in the first layer, we want to predict the output of every capsule $j$ in the second layer. For this, we will need a transformation matrix $\mathbf{W}_{i,j}$ (one for each pair of capsules ($i$, $j$)), then we can compute the predicted output $\hat{\mathbf{u}}_{j|i} = \mathbf{W}_{i,j} \, \mathbf{u}_i$ (equation (2)-right in the paper). Since we want to transform an 8D vector into a 16D vector, each transformation matrix $\mathbf{W}_{i,j}$ must have a shape of (16, 8). To compute $\hat{\mathbf{u}}_{j|i}$ for every pair of capsules ($i$, $j$), we will use a nice feature of the `tf.matmul()` function: you probably know that it lets you multiply two matrices, but you may not know that it also lets you multiply higher dimensional arrays. It treats the arrays as arrays of matrices, and it performs itemwise matrix multiplication. For example, suppose you have two 4D arrays, each containing a 2×3 grid of matrices. The first contains matrices $\mathbf{A}, \mathbf{B}, \mathbf{C}, \mathbf{D}, \mathbf{E}, \mathbf{F}$ and the second contains matrices $\mathbf{G}, \mathbf{H}, \mathbf{I}, \mathbf{J}, \mathbf{K}, \mathbf{L}$. If you multiply these two 4D arrays using the `tf.matmul()` function, this is what you get:$\pmatrix{\mathbf{A} & \mathbf{B} & \mathbf{C} \\\mathbf{D} & \mathbf{E} & \mathbf{F}} \times\pmatrix{\mathbf{G} & \mathbf{H} & \mathbf{I} \\\mathbf{J} & \mathbf{K} & \mathbf{L}} = \pmatrix{\mathbf{AG} & \mathbf{BH} & \mathbf{CI} \\\mathbf{DJ} & \mathbf{EK} & \mathbf{FL}}$ We can apply this function to compute $\hat{\mathbf{u}}_{j|i}$ for every pair of capsules ($i$, $j$) like this (recall that there are 6×6×32=1152 capsules in the first layer, and 10 in the second layer):$\pmatrix{ \mathbf{W}_{1,1} & \mathbf{W}_{1,2} & \cdots & \mathbf{W}_{1,10} \\ \mathbf{W}_{2,1} & \mathbf{W}_{2,2} & \cdots & \mathbf{W}_{2,10} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{W}_{1152,1} & \mathbf{W}_{1152,2} & \cdots & \mathbf{W}_{1152,10}} \times\pmatrix{ \mathbf{u}_1 & \mathbf{u}_1 & \cdots & \mathbf{u}_1 \\ \mathbf{u}_2 & \mathbf{u}_2 & \cdots & \mathbf{u}_2 \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{u}_{1152} & \mathbf{u}_{1152} & \cdots & \mathbf{u}_{1152}}=\pmatrix{\hat{\mathbf{u}}_{1|1} & \hat{\mathbf{u}}_{2|1} & \cdots & \hat{\mathbf{u}}_{10|1} \\\hat{\mathbf{u}}_{1|2} & \hat{\mathbf{u}}_{2|2} & \cdots & \hat{\mathbf{u}}_{10|2} \\\vdots & \vdots & \ddots & \vdots \\\hat{\mathbf{u}}_{1|1152} & \hat{\mathbf{u}}_{2|1152} & \cdots & \hat{\mathbf{u}}_{10|1152}}$ The shape of the first array is (1152, 10, 16, 8), and the shape of the second array is (1152, 10, 8, 1). Note that the second array must contain 10 identical copies of the vectors $\mathbf{u}_1$ to $\mathbf{u}_{1152}$. To create this array, we will use the handy `tf.tile()` function, which lets you create an array containing many copies of a base array, tiled in any way you want. Oh, wait a second! We forgot one dimension: _batch size_. Say we feed 50 images to the capsule network, it will make predictions for these 50 images simultaneously. So the shape of the first array must be (50, 1152, 10, 16, 8), and the shape of the second array must be (50, 1152, 10, 8, 1). The first layer capsules actually already output predictions for all 50 images, so the second array will be fine, but for the first array, we will need to use `tf.tile()` to have 50 copies of the transformation matrices. Okay, let's start by creating a trainable variable of shape (1, 1152, 10, 16, 8) that will hold all the transformation matrices. 
The first dimension of size 1 will make this array easy to tile. We initialize this variable randomly using a normal distribution with a standard deviation of 0.1.
###Code
init_sigma = 0.1
W_init = tf.random_normal(
shape=(1, caps1_n_caps, caps2_n_caps, caps2_n_dims, caps1_n_dims),
stddev=init_sigma, dtype=tf.float32, name="W_init")
W = tf.Variable(W_init, name="W")
###Output
_____no_output_____
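###Markdown
If the itemwise matrix multiplication described above still feels abstract, here is a tiny standalone sketch (my addition, with throwaway names): a 2×3 grid of 16×8 matrices times a 2×3 grid of 8×1 column vectors gives a 2×3 grid of 16×1 column vectors.
###Code
demo_W = tf.random_normal([2, 3, 16, 8]) # a 2x3 grid of 16x8 matrices
demo_u = tf.random_normal([2, 3, 8, 1])  # a 2x3 grid of 8x1 column vectors
tf.matmul(demo_W, demo_u)                # shape: (2, 3, 16, 1)
###Output
_____no_output_____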
###Markdown
Now we can create the first array by repeating `W` once per instance:
###Code
batch_size = tf.shape(X)[0]
W_tiled = tf.tile(W, [batch_size, 1, 1, 1, 1], name="W_tiled")
###Output
_____no_output_____
###Markdown
That's it! On to the second array, now. As discussed earlier, we need to create an array of shape (_batch size_, 1152, 10, 8, 1), containing the output of the first layer capsules, repeated 10 times (once per digit, along the third dimension, which is axis=2). The `caps1_output` array has a shape of (_batch size_, 1152, 8), so we first need to expand it twice, to get an array of shape (_batch size_, 1152, 1, 8, 1), then we can repeat it 10 times along the third dimension:
###Code
caps1_output_expanded = tf.expand_dims(caps1_output, -1,
name="caps1_output_expanded")
caps1_output_tile = tf.expand_dims(caps1_output_expanded, 2,
name="caps1_output_tile")
caps1_output_tiled = tf.tile(caps1_output_tile, [1, 1, caps2_n_caps, 1, 1],
name="caps1_output_tiled")
###Output
_____no_output_____
###Markdown
Let's check the shape of the first array:
###Code
W_tiled
###Output
_____no_output_____
###Markdown
Good, and now the second:
###Code
caps1_output_tiled
###Output
_____no_output_____
###Markdown
Yes! Now, to get all the predicted output vectors $\hat{\mathbf{u}}_{j|i}$, we just need to multiply these two arrays using `tf.matmul()`, as explained earlier:
###Code
caps2_predicted = tf.matmul(W_tiled, caps1_output_tiled,
name="caps2_predicted")
###Output
_____no_output_____
###Markdown
Let's check the shape:
###Code
caps2_predicted
###Output
_____no_output_____
###Markdown
Perfect, for each instance in the batch (we don't know the batch size yet, hence the "?") and for each pair of first and second layer capsules (1152×10) we have a 16D predicted output column vector (16×1). We're ready to apply the routing by agreement algorithm! Routing by agreement First let's initialize the raw routing weights $b_{i,j}$ to zero:
###Code
raw_weights = tf.zeros([batch_size, caps1_n_caps, caps2_n_caps, 1, 1],
dtype=np.float32, name="raw_weights")
###Output
_____no_output_____
###Markdown
We will see why we need the last two dimensions of size 1 in a minute. Round 1 First, let's apply the softmax function to compute the routing weights, $\mathbf{c}_{i} = \operatorname{softmax}(\mathbf{b}_i)$ (equation (3) in the paper):
###Code
routing_weights = tf.nn.softmax(raw_weights, dim=2, name="routing_weights")
###Output
_____no_output_____
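###Markdown
Since the raw weights start at zero, this softmax initially routes each primary capsule's output uniformly: every digit capsule gets a weight of 1/10. A quick check (my addition, with throwaway names):
###Code
demo_weights = tf.nn.softmax(tf.zeros([1, 2, 10, 1, 1]), dim=2)
with tf.Session() as sess:
    # expected: ten values of 0.1 for the first (dummy) primary capsule
    print(sess.run(demo_weights)[0, 0, :, 0, 0])
###Output
_____no_output_____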
###Markdown
Now let's compute the weighted sum of all the predicted output vectors for each second-layer capsule, $\mathbf{s}_j = \sum\limits_{i}{c_{i,j}\hat{\mathbf{u}}_{j|i}}$ (equation (2)-left in the paper):
###Code
weighted_predictions = tf.multiply(routing_weights, caps2_predicted,
name="weighted_predictions")
weighted_sum = tf.reduce_sum(weighted_predictions, axis=1, keep_dims=True,
name="weighted_sum")
###Output
_____no_output_____
###Markdown
There are a couple important details to note here:* To perform elementwise matrix multiplication (also called the Hadamard product, noted $\circ$), we use the `tf.multiply()` function. It requires `routing_weights` and `caps2_predicted` to have the same rank, which is why we added two extra dimensions of size 1 to `routing_weights`, earlier.* The shape of `routing_weights` is (_batch size_, 1152, 10, 1, 1) while the shape of `caps2_predicted` is (_batch size_, 1152, 10, 16, 1). Since they don't match on the fourth dimension (1 _vs_ 16), `tf.multiply()` automatically _broadcasts_ the `routing_weights` 16 times along that dimension. If you are not familiar with broadcasting, a simple example might help: $ \pmatrix{1 & 2 & 3 \\ 4 & 5 & 6} \circ \pmatrix{10 & 100 & 1000} = \pmatrix{1 & 2 & 3 \\ 4 & 5 & 6} \circ \pmatrix{10 & 100 & 1000 \\ 10 & 100 & 1000} = \pmatrix{10 & 200 & 3000 \\ 40 & 500 & 6000} $ And finally, let's apply the squash function to get the outputs of the second layer capsules at the end of the first iteration of the routing by agreement algorithm, $\mathbf{v}_j = \operatorname{squash}(\mathbf{s}_j)$ :
###Code
caps2_output_round_1 = squash(weighted_sum, axis=-2,
name="caps2_output_round_1")
caps2_output_round_1
###Output
_____no_output_____
###Markdown
Good! We have ten 16D output vectors for each instance, as expected. Round 2 First, let's measure how close each predicted vector $\hat{\mathbf{u}}_{j|i}$ is to the actual output vector $\mathbf{v}_j$ by computing their scalar product $\hat{\mathbf{u}}_{j|i} \cdot \mathbf{v}_j$. * Quick math reminder: if $\vec{a}$ and $\vec{b}$ are two vectors of equal length, and $\mathbf{a}$ and $\mathbf{b}$ are their corresponding column vectors (i.e., matrices with a single column), then $\mathbf{a}^T \mathbf{b}$ (i.e., the matrix multiplication of the transpose of $\mathbf{a}$, and $\mathbf{b}$) is a 1×1 matrix containing the scalar product of the two vectors $\vec{a}\cdot\vec{b}$. In Machine Learning, we generally represent vectors as column vectors, so when we talk about computing the scalar product $\hat{\mathbf{u}}_{j|i} \cdot \mathbf{v}_j$, this actually means computing ${\hat{\mathbf{u}}_{j|i}}^T \mathbf{v}_j$. Since we need to compute the scalar product $\hat{\mathbf{u}}_{j|i} \cdot \mathbf{v}_j$ for each instance, and for each pair of first and second level capsules $(i, j)$, we will once again take advantage of the fact that `tf.matmul()` can multiply many matrices simultaneously. This will require playing around with `tf.tile()` to get all dimensions to match (except for the last 2), just like we did earlier. So let's look at the shape of `caps2_predicted`, which holds all the predicted output vectors $\hat{\mathbf{u}}_{j|i}$ for each instance and each pair of capsules:
###Code
caps2_predicted
###Output
_____no_output_____
###Markdown
And now let's look at the shape of `caps2_output_round_1`, which holds 10 outputs vectors of 16D each, for each instance:
###Code
caps2_output_round_1
###Output
_____no_output_____
###Markdown
To get these shapes to match, we just need to tile the `caps2_output_round_1` array 1152 times (once per primary capsule) along the second dimension:
###Code
caps2_output_round_1_tiled = tf.tile(
caps2_output_round_1, [1, caps1_n_caps, 1, 1, 1],
name="caps2_output_round_1_tiled")
###Output
_____no_output_____
###Markdown
And now we are ready to call `tf.matmul()` (note that we must tell it to transpose the matrices in the first array, to get ${\hat{\mathbf{u}}_{j|i}}^T$ instead of $\hat{\mathbf{u}}_{j|i}$):
###Code
agreement = tf.matmul(caps2_predicted, caps2_output_round_1_tiled,
transpose_a=True, name="agreement")
###Output
_____no_output_____
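###Markdown
To make the `transpose_a` trick concrete, here is a minimal sketch (my addition, with throwaway names): for two column vectors, `tf.matmul()` with `transpose_a=True` returns a 1×1 matrix holding their scalar product.
###Code
demo_a = tf.constant([[1.], [2.], [3.]])
demo_b = tf.constant([[4.], [5.], [6.]])
with tf.Session() as sess:
    # expected: [[32.]] since 1*4 + 2*5 + 3*6 = 32
    print(sess.run(tf.matmul(demo_a, demo_b, transpose_a=True)))
###Output
_____no_output_____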
###Markdown
We can now update the raw routing weights $b_{i,j}$ by simply adding the scalar product $\hat{\mathbf{u}}_{j|i} \cdot \mathbf{v}_j$ we just computed: $b_{i,j} \gets b_{i,j} + \hat{\mathbf{u}}_{j|i} \cdot \mathbf{v}_j$ (see Procedure 1, step 7, in the paper).
###Code
raw_weights_round_2 = tf.add(raw_weights, agreement,
name="raw_weights_round_2")
###Output
_____no_output_____
###Markdown
The rest of round 2 is the same as in round 1:
###Code
routing_weights_round_2 = tf.nn.softmax(raw_weights_round_2,
dim=2,
name="routing_weights_round_2")
weighted_predictions_round_2 = tf.multiply(routing_weights_round_2,
caps2_predicted,
name="weighted_predictions_round_2")
weighted_sum_round_2 = tf.reduce_sum(weighted_predictions_round_2,
axis=1, keep_dims=True,
name="weighted_sum_round_2")
caps2_output_round_2 = squash(weighted_sum_round_2,
axis=-2,
name="caps2_output_round_2")
###Output
_____no_output_____
###Markdown
We could go on for a few more rounds, by repeating exactly the same steps as in round 2, but to keep things short, we will stop here:
###Code
caps2_output = caps2_output_round_2
###Output
_____no_output_____
###Markdown
Static or Dynamic Loop? In the code above, we created different operations in the TensorFlow graph for each round of the routing by agreement algorithm. In other words, it's a static loop. Sure, instead of copy/pasting the code several times, we could have written a `for` loop in Python, but this would not change the fact that the graph would end up containing different operations for each routing iteration. It's actually okay since we generally want fewer than 5 routing iterations, so the graph won't grow too big. However, you may prefer to implement the routing loop within the TensorFlow graph itself rather than using a Python `for` loop. To do this, you would need to use TensorFlow's `tf.while_loop()` function. This way, all routing iterations would reuse the same operations in the graph: it would be a dynamic loop. For example, here is how to build a small loop that computes the sum of squares from 1 to 100:
###Code
def condition(input, counter):
return tf.less(counter, 100)
def loop_body(input, counter):
output = tf.add(input, tf.square(counter))
return output, tf.add(counter, 1)
with tf.name_scope("compute_sum_of_squares"):
counter = tf.constant(1)
sum_of_squares = tf.constant(0)
result = tf.while_loop(condition, loop_body, [sum_of_squares, counter])
with tf.Session() as sess:
print(sess.run(result))
###Output
(328350, 100)
###Markdown
As you can see, the `tf.while_loop()` function expects the loop condition and body to be provided _via_ two functions. These functions will be called only once by TensorFlow, during the graph construction phase, _not_ while executing the graph. The `tf.while_loop()` function stitches together the graph fragments created by `condition()` and `loop_body()` with some additional operations to create the loop. Also note that during training, TensorFlow will automagically handle backpropagation through the loop, so you don't need to worry about that. Of course, we could have used this one-liner instead! ;-)
###Code
sum([i**2 for i in range(1, 100 + 1)])
###Output
_____no_output_____
###Markdown
Joke aside, apart from reducing the graph size, using a dynamic loop instead of a static loop can help reduce how much GPU RAM you use (if you are using a GPU). Indeed, if you set `swap_memory=True` when calling the `tf.while_loop()` function, TensorFlow will automatically check GPU RAM usage at each loop iteration, and it will take care of swapping memory between the GPU and the CPU when needed. Since CPU memory is much cheaper and more abundant than GPU RAM, this can really make a big difference. Estimated Class Probabilities (Length) The lengths of the output vectors represent the class probabilities, so we could just use `tf.norm()` to compute them, but as we saw when discussing the squash function, it would be risky, so instead let's create our own `safe_norm()` function:
###Code
def safe_norm(s, axis=-1, epsilon=1e-7, keep_dims=False, name=None):
with tf.name_scope(name, default_name="safe_norm"):
squared_norm = tf.reduce_sum(tf.square(s), axis=axis,
keep_dims=keep_dims)
return tf.sqrt(squared_norm + epsilon)
y_proba = safe_norm(caps2_output, axis=-2, name="y_proba")
###Output
_____no_output_____
###Markdown
To predict the class of each instance, we can just select the one with the highest estimated probability. To do this, let's start by finding its index using `tf.argmax()`:
###Code
y_proba_argmax = tf.argmax(y_proba, axis=2, name="y_proba_argmax")
###Output
_____no_output_____
###Markdown
Let's look at the shape of `y_proba_argmax`:
###Code
y_proba_argmax
###Output
_____no_output_____
###Markdown
That's what we wanted: for each instance, we now have the index of the longest output vector. Let's get rid of the last two dimensions by using `tf.squeeze()` which removes dimensions of size 1. This gives us the capsule network's predicted class for each instance:
###Code
y_pred = tf.squeeze(y_proba_argmax, axis=[1,2], name="y_pred")
y_pred
###Output
_____no_output_____
###Markdown
Okay, we are now ready to define the training operations, starting with the losses. Labels First, we will need a placeholder for the labels:
###Code
y = tf.placeholder(shape=[None], dtype=tf.int64, name="y")
###Output
_____no_output_____
###Markdown
Margin loss The paper uses a special margin loss to make it possible to detect two or more different digits in each image:$ L_k = T_k \max(0, m^{+} - \|\mathbf{v}_k\|)^2 + \lambda (1 - T_k) \max(0, \|\mathbf{v}_k\| - m^{-})^2$* $T_k$ is equal to 1 if the digit of class $k$ is present, or 0 otherwise.* In the paper, $m^{+} = 0.9$, $m^{-} = 0.1$ and $\lambda = 0.5$.* Note that there was an error in the video (at 15:47): the max operations are squared, not the norms. Sorry about that.
###Code
m_plus = 0.9
m_minus = 0.1
lambda_ = 0.5
###Output
_____no_output_____
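###Markdown
A quick worked example (my addition): if the digit of class $k$ is present ($T_k = 1$) and the capsule is confident, say $\|\mathbf{v}_k\| = 0.95$, then $L_k = \max(0, 0.9 - 0.95)^2 = 0$; if it is unsure, say $\|\mathbf{v}_k\| = 0.3$, then $L_k = \max(0, 0.9 - 0.3)^2 = 0.36$:
###Code
for norm in (0.95, 0.3):
    # present-digit term of the margin loss, using m_plus defined above
    print(max(0., m_plus - norm) ** 2) # 0.0, then ~0.36
###Output
_____no_output_____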
###Markdown
Since `y` will contain the digit classes, from 0 to 9, to get $T_k$ for every instance and every class, we can just use the `tf.one_hot()` function:
###Code
T = tf.one_hot(y, depth=caps2_n_caps, name="T")
###Output
_____no_output_____
###Markdown
A small example should make it clear what this does:
###Code
with tf.Session():
print(T.eval(feed_dict={y: np.array([0, 1, 2, 3, 9])}))
###Output
[[ 1. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 1. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 1. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 1.]]
###Markdown
Now let's compute the norm of the output vector for each output capsule and each instance. First, let's verify the shape of `caps2_output`:
###Code
caps2_output
###Output
_____no_output_____
###Markdown
The 16D output vectors are in the second to last dimension, so let's use the `safe_norm()` function with `axis=-2`:
###Code
caps2_output_norm = safe_norm(caps2_output, axis=-2, keep_dims=True,
name="caps2_output_norm")
###Output
_____no_output_____
###Markdown
Now let's compute $\max(0, m^{+} - \|\mathbf{v}_k\|)^2$, and reshape the result to get a simple matrix of shape (_batch size_, 10):
###Code
present_error_raw = tf.square(tf.maximum(0., m_plus - caps2_output_norm),
name="present_error_raw")
present_error = tf.reshape(present_error_raw, shape=(-1, 10),
name="present_error")
###Output
_____no_output_____
###Markdown
Next let's compute $\max(0, \|\mathbf{v}_k\| - m^{-})^2$ and reshape it:
###Code
absent_error_raw = tf.square(tf.maximum(0., caps2_output_norm - m_minus),
name="absent_error_raw")
absent_error = tf.reshape(absent_error_raw, shape=(-1, 10),
name="absent_error")
###Output
_____no_output_____
###Markdown
We are ready to compute the loss for each instance and each digit:
###Code
L = tf.add(T * present_error, lambda_ * (1.0 - T) * absent_error,
name="L")
###Output
_____no_output_____
###Markdown
Now we can sum the digit losses for each instance ($L_0 + L_1 + \cdots + L_9$), and compute the mean over all instances. This gives us the final margin loss:
###Code
margin_loss = tf.reduce_mean(tf.reduce_sum(L, axis=1), name="margin_loss")
###Output
_____no_output_____
###Markdown
Reconstruction Now let's add a decoder network on top of the capsule network. It is a regular 3-layer fully connected neural network which will learn to reconstruct the input images based on the output of the capsule network. This will force the capsule network to preserve all the information required to reconstruct the digits, across the whole network. This constraint regularizes the model: it reduces the risk of overfitting the training set, and it helps generalize to new digits. Mask The paper mentions that during training, instead of sending all the outputs of the capsule network to the decoder network, we must send only the output vector of the capsule that corresponds to the target digit. All the other output vectors must be masked out. At inference time, we must mask all output vectors except for the longest one, i.e., the one that corresponds to the predicted digit. You can see this in the paper's figure 2 (at 18:15 in the video): all output vectors are masked out, except for the reconstruction target's output vector. We need a placeholder to tell TensorFlow whether we want to mask the output vectors based on the labels (`True`) or on the predictions (`False`, the default):
###Code
mask_with_labels = tf.placeholder_with_default(False, shape=(),
name="mask_with_labels")
###Output
_____no_output_____
###Markdown
Now let's use `tf.cond()` to define the reconstruction targets as the labels `y` if `mask_with_labels` is `True`, or `y_pred` otherwise.
###Code
reconstruction_targets = tf.cond(mask_with_labels, # condition
lambda: y, # if True
lambda: y_pred, # if False
name="reconstruction_targets")
###Output
_____no_output_____
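###Markdown
Before we discuss a few subtleties of `tf.cond()` below, here is a minimal standalone sketch (my addition, with throwaway names) of how it behaves:
###Code
demo_pred = tf.placeholder_with_default(False, shape=())
demo_branch = tf.cond(demo_pred,
                      lambda: tf.constant(1), # if True
                      lambda: tf.constant(0)) # if False
with tf.Session() as sess:
    print(sess.run(demo_branch)) # 0 (the default)
    print(sess.run(demo_branch, feed_dict={demo_pred: True})) # 1
###Output
_____no_output_____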
###Markdown
Note that the `tf.cond()` function expects the if-True and if-False tensors to be passed _via_ functions: these functions will be called just once during the graph construction phase (not during the execution phase), similar to `tf.while_loop()`. This allows TensorFlow to add the necessary operations to handle the conditional evaluation of the if-True or if-False tensors. However, in our case, the tensors `y` and `y_pred` are already created by the time we call `tf.cond()`, so unfortunately TensorFlow will consider both `y` and `y_pred` to be dependencies of the `reconstruction_targets` tensor. The `reconstruction_targets` tensor will end up with the correct value, but: 1. whenever we evaluate a tensor that depends on `reconstruction_targets`, the `y_pred` tensor will be evaluated (even if `mask_with_labels` is `True`). This is not a big deal because computing `y_pred` adds no computing overhead during training, since we need it anyway to compute the margin loss. And during testing, if we are doing classification, we won't need reconstructions, so `reconstruction_targets` won't be evaluated at all. 2. we will always need to feed a value for the `y` placeholder (even if `mask_with_labels` is `False`). This is a bit annoying, but we can pass an empty array, because TensorFlow won't use it anyway (it just does not know it yet when it checks for dependencies). Now that we have the reconstruction targets, let's create the reconstruction mask. It should be equal to 1.0 for the target class, and 0.0 for the other classes, for each instance. For this we can just use the `tf.one_hot()` function:
###Code
reconstruction_mask = tf.one_hot(reconstruction_targets,
depth=caps2_n_caps,
name="reconstruction_mask")
###Output
_____no_output_____
###Markdown
Let's check the shape of `reconstruction_mask`:
###Code
reconstruction_mask
###Output
_____no_output_____
###Markdown
Let's compare this to the shape of `caps2_output`:
###Code
caps2_output
###Output
_____no_output_____
###Markdown
Mmh, its shape is (_batch size_, 1, 10, 16, 1). We want to multiply it by the `reconstruction_mask`, but the shape of the `reconstruction_mask` is (_batch size_, 10). We must reshape it to (_batch size_, 1, 10, 1, 1) to make multiplication possible:
###Code
reconstruction_mask_reshaped = tf.reshape(
reconstruction_mask, [-1, 1, caps2_n_caps, 1, 1],
name="reconstruction_mask_reshaped")
###Output
_____no_output_____
###Markdown
At last! We can apply the mask:
###Code
caps2_output_masked = tf.multiply(
caps2_output, reconstruction_mask_reshaped,
name="caps2_output_masked")
caps2_output_masked
###Output
_____no_output_____
###Markdown
One last reshape operation to flatten the decoder's inputs:
###Code
decoder_input = tf.reshape(caps2_output_masked,
[-1, caps2_n_caps * caps2_n_dims],
name="decoder_input")
###Output
_____no_output_____
###Markdown
This gives us an array of shape (_batch size_, 160):
###Code
decoder_input
###Output
_____no_output_____
###Markdown
Decoder Now let's build the decoder. It's quite simple: two dense (fully connected) ReLU layers followed by a dense output sigmoid layer:
###Code
n_hidden1 = 512
n_hidden2 = 1024
n_output = 28 * 28
with tf.name_scope("decoder"):
hidden1 = tf.layers.dense(decoder_input, n_hidden1,
activation=tf.nn.relu,
name="hidden1")
hidden2 = tf.layers.dense(hidden1, n_hidden2,
activation=tf.nn.relu,
name="hidden2")
decoder_output = tf.layers.dense(hidden2, n_output,
activation=tf.nn.sigmoid,
name="decoder_output")
###Output
_____no_output_____
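###Markdown
For the curious, here is a quick parameter count for this decoder (my addition), using the dimensions defined above:
###Code
n_decoder_params = (caps2_n_caps * caps2_n_dims * n_hidden1 + n_hidden1
                    + n_hidden1 * n_hidden2 + n_hidden2
                    + n_hidden2 * n_output + n_output)
print(n_decoder_params) # 1,411,344 weights and biases
###Output
_____no_output_____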
###Markdown
Reconstruction Loss Now let's compute the reconstruction loss. It is just the squared difference between the input image and the reconstructed image:
###Code
X_flat = tf.reshape(X, [-1, n_output], name="X_flat")
squared_difference = tf.square(X_flat - decoder_output,
name="squared_difference")
reconstruction_loss = tf.reduce_mean(squared_difference,
name="reconstruction_loss")
###Output
_____no_output_____
###Markdown
Final Loss The final loss is the sum of the margin loss and the reconstruction loss (scaled down by a factor of 0.0005 to ensure the margin loss dominates training):
###Code
alpha = 0.0005
loss = tf.add(margin_loss, alpha * reconstruction_loss, name="loss")
###Output
_____no_output_____
###Markdown
Final Touches Accuracy To measure our model's accuracy, we need to count the number of instances that are properly classified. For this, we can simply compare `y` and `y_pred`, convert the boolean value to a float32 (0.0 for False, 1.0 for True), and compute the mean over all the instances:
###Code
correct = tf.equal(y, y_pred, name="correct")
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32), name="accuracy")
###Output
_____no_output_____
###Markdown
Training Operations The paper mentions that the authors used the Adam optimizer with TensorFlow's default parameters:
###Code
optimizer = tf.train.AdamOptimizer()
training_op = optimizer.minimize(loss, name="training_op")
###Output
_____no_output_____
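###Markdown
For reference (my addition), those defaults are spelled out below. This second optimizer object is not used anywhere; it just documents the values:
###Code
optimizer_with_defaults = tf.train.AdamOptimizer(learning_rate=0.001,
                                                 beta1=0.9, beta2=0.999,
                                                 epsilon=1e-08)
###Output
_____no_output_____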
###Markdown
Init and Saver And let's add the usual variable initializer, as well as a `Saver`:
###Code
init = tf.global_variables_initializer()
saver = tf.train.Saver()
###Output
_____no_output_____
###Markdown
And... we're done with the construction phase! Please take a moment to celebrate. :) Training Training our capsule network is pretty standard. For simplicity, we won't do any fancy hyperparameter tuning, dropout or anything: we will just run the training operation over and over again, displaying the loss, and at the end of each epoch, measure the accuracy on the validation set, display it, and save the model if the validation loss is the lowest seen so far (this is a basic way to implement early stopping, without actually stopping). Hopefully the code should be self-explanatory, but here are a few details to note:* if a checkpoint file exists, it will be restored (this makes it possible to interrupt training, then restart it later from the last checkpoint),* we must not forget to feed `mask_with_labels=True` during training,* during testing, we let `mask_with_labels` default to `False` (but we still feed the labels since they are required to compute the accuracy),* the images loaded _via_ `mnist.train.next_batch()` are represented as `float32` arrays of shape \[784\], but the input placeholder `X` expects a `float32` array of shape \[28, 28, 1\], so we must reshape the images before we feed them to our model,* we evaluate the model's loss and accuracy on the full validation set (5,000 instances). To view progress and support systems that don't have a lot of RAM, the code evaluates the loss and accuracy on one batch at a time, and computes the mean loss and mean accuracy at the end.*Warning*: if you don't have a GPU, training will take a very long time (at least a few hours). With a GPU, it should take just a few minutes per epoch (e.g., 6 minutes on an NVidia GeForce GTX 1080Ti).
###Code
n_epochs = 10
batch_size = 50
restore_checkpoint = True
n_iterations_per_epoch = mnist.train.num_examples // batch_size
n_iterations_validation = mnist.validation.num_examples // batch_size
best_loss_val = np.infty
checkpoint_path = "./my_capsule_network"
with tf.Session() as sess:
if restore_checkpoint and tf.train.checkpoint_exists(checkpoint_path):
saver.restore(sess, checkpoint_path)
else:
init.run()
for epoch in range(n_epochs):
for iteration in range(1, n_iterations_per_epoch + 1):
X_batch, y_batch = mnist.train.next_batch(batch_size)
# Run the training operation and measure the loss:
_, loss_train = sess.run(
[training_op, loss],
feed_dict={X: X_batch.reshape([-1, 28, 28, 1]),
y: y_batch,
mask_with_labels: True})
print("\rIteration: {}/{} ({:.1f}%) Loss: {:.5f}".format(
iteration, n_iterations_per_epoch,
iteration * 100 / n_iterations_per_epoch,
loss_train),
end="")
# At the end of each epoch,
# measure the validation loss and accuracy:
loss_vals = []
acc_vals = []
for iteration in range(1, n_iterations_validation + 1):
X_batch, y_batch = mnist.validation.next_batch(batch_size)
loss_val, acc_val = sess.run(
[loss, accuracy],
feed_dict={X: X_batch.reshape([-1, 28, 28, 1]),
y: y_batch})
loss_vals.append(loss_val)
acc_vals.append(acc_val)
print("\rEvaluating the model: {}/{} ({:.1f}%)".format(
iteration, n_iterations_validation,
iteration * 100 / n_iterations_validation),
end=" " * 10)
loss_val = np.mean(loss_vals)
acc_val = np.mean(acc_vals)
print("\rEpoch: {} Val accuracy: {:.4f}% Loss: {:.6f}{}".format(
epoch + 1, acc_val * 100, loss_val,
" (improved)" if loss_val < best_loss_val else ""))
# And save the model if it improved:
if loss_val < best_loss_val:
save_path = saver.save(sess, checkpoint_path)
best_loss_val = loss_val
###Output
INFO:tensorflow:Restoring parameters from ./my_capsule_network
Epoch: 1 Val accuracy: 99.4400% Loss: 0.007998 (improved)
Epoch: 2 Val accuracy: 99.3400% Loss: 0.007959 (improved)
Epoch: 3 Val accuracy: 99.4000% Loss: 0.007436 (improved)
Epoch: 4 Val accuracy: 99.4000% Loss: 0.007568
Epoch: 5 Val accuracy: 99.2600% Loss: 0.007464
Epoch: 6 Val accuracy: 99.4800% Loss: 0.006631 (improved)
Epoch: 7 Val accuracy: 99.4000% Loss: 0.006915
Epoch: 8 Val accuracy: 99.4200% Loss: 0.006735
Epoch: 9 Val accuracy: 99.2200% Loss: 0.007709
Epoch: 10 Val accuracy: 99.4000% Loss: 0.007083
###Markdown
Training is finished: we reached over 99.4% accuracy on the validation set after just 5 epochs, so things are looking good. Now let's evaluate the model on the test set. Evaluation
###Code
n_iterations_test = mnist.test.num_examples // batch_size
with tf.Session() as sess:
saver.restore(sess, checkpoint_path)
loss_tests = []
acc_tests = []
for iteration in range(1, n_iterations_test + 1):
X_batch, y_batch = mnist.test.next_batch(batch_size)
loss_test, acc_test = sess.run(
[loss, accuracy],
feed_dict={X: X_batch.reshape([-1, 28, 28, 1]),
y: y_batch})
loss_tests.append(loss_test)
acc_tests.append(acc_test)
print("\rEvaluating the model: {}/{} ({:.1f}%)".format(
iteration, n_iterations_test,
iteration * 100 / n_iterations_test),
end=" " * 10)
loss_test = np.mean(loss_tests)
acc_test = np.mean(acc_tests)
print("\rFinal test accuracy: {:.4f}% Loss: {:.6f}".format(
acc_test * 100, loss_test))
###Output
INFO:tensorflow:Restoring parameters from ./my_capsule_network
Final test accuracy: 99.5300% Loss: 0.006631
###Markdown
We reach 99.53% accuracy on the test set. Pretty nice. :) Predictions Now let's make some predictions! We first fix a few images from the test set, then we start a session, restore the trained model, evaluate `caps2_output` to get the capsule network's output vectors, `decoder_output` to get the reconstructions, and `y_pred` to get the class predictions:
###Code
n_samples = 5
sample_images = mnist.test.images[:n_samples].reshape([-1, 28, 28, 1])
with tf.Session() as sess:
saver.restore(sess, checkpoint_path)
caps2_output_value, decoder_output_value, y_pred_value = sess.run(
[caps2_output, decoder_output, y_pred],
feed_dict={X: sample_images,
y: np.array([], dtype=np.int64)})
###Output
INFO:tensorflow:Restoring parameters from ./my_capsule_network
###Markdown
Note: we feed `y` with an empty array, but TensorFlow will not use it, as explained earlier. And now let's plot the images and their labels, followed by the corresponding reconstructions and predictions:
###Code
sample_images = sample_images.reshape(-1, 28, 28)
reconstructions = decoder_output_value.reshape([-1, 28, 28])
plt.figure(figsize=(n_samples * 2, 3))
for index in range(n_samples):
plt.subplot(1, n_samples, index + 1)
plt.imshow(sample_images[index], cmap="binary")
plt.title("Label:" + str(mnist.test.labels[index]))
plt.axis("off")
plt.show()
plt.figure(figsize=(n_samples * 2, 3))
for index in range(n_samples):
plt.subplot(1, n_samples, index + 1)
plt.title("Predicted:" + str(y_pred_value[index]))
plt.imshow(reconstructions[index], cmap="binary")
plt.axis("off")
plt.show()
###Output
_____no_output_____
###Markdown
The predictions are all correct, and the reconstructions look great. Hurray! Interpreting the Output Vectors Let's tweak the output vectors to see what their pose parameters represent. First, let's check the shape of the `caps2_output_value` NumPy array:
###Code
caps2_output_value.shape
###Output
_____no_output_____
###Markdown
Let's create a function that will tweak each of the 16 pose parameters (dimensions) in all output vectors. Each tweaked output vector will be identical to the original output vector, except that one of its pose parameters will be incremented by a value varying from -0.5 to 0.5. By default there will be 11 steps (-0.5, -0.4, ..., +0.4, +0.5). This function will return an array of shape (_tweaked pose parameters_=16, _steps_=11, _batch size_=5, 1, 10, 16, 1):
###Code
def tweak_pose_parameters(output_vectors, min=-0.5, max=0.5, n_steps=11):
    steps = np.linspace(min, max, n_steps) # -0.5, -0.4, ..., +0.5
pose_parameters = np.arange(caps2_n_dims) # 0, 1, ..., 15
tweaks = np.zeros([caps2_n_dims, n_steps, 1, 1, 1, caps2_n_dims, 1])
tweaks[pose_parameters, :, 0, 0, 0, pose_parameters, 0] = steps
output_vectors_expanded = output_vectors[np.newaxis, np.newaxis]
return tweaks + output_vectors_expanded
###Output
_____no_output_____
###Markdown
Let's compute all the tweaked output vectors and reshape the result to (_parameters_×_steps_×_instances_, 1, 10, 16, 1) so we can feed the array to the decoder:
###Code
n_steps = 11
tweaked_vectors = tweak_pose_parameters(caps2_output_value, n_steps=n_steps)
tweaked_vectors_reshaped = tweaked_vectors.reshape(
[-1, 1, caps2_n_caps, caps2_n_dims, 1])
###Output
_____no_output_____
###Markdown
Now let's feed these tweaked output vectors to the decoder and get the reconstructions it produces:
###Code
tweak_labels = np.tile(mnist.test.labels[:n_samples], caps2_n_dims * n_steps)
with tf.Session() as sess:
saver.restore(sess, checkpoint_path)
decoder_output_value = sess.run(
decoder_output,
feed_dict={caps2_output: tweaked_vectors_reshaped,
mask_with_labels: True,
y: tweak_labels})
###Output
INFO:tensorflow:Restoring parameters from ./my_capsule_network
###Markdown
Let's reshape the decoder's output so we can easily iterate on the output dimension, the tweak steps, and the instances:
###Code
tweak_reconstructions = decoder_output_value.reshape(
[caps2_n_dims, n_steps, n_samples, 28, 28])
###Output
_____no_output_____
###Markdown
Lastly, let's plot all the reconstructions, for the first 3 output dimensions, for each tweaking step (column) and each digit (row):
###Code
for dim in range(3):
print("Tweaking output dimension #{}".format(dim))
plt.figure(figsize=(n_steps / 1.2, n_samples / 1.5))
for row in range(n_samples):
for col in range(n_steps):
plt.subplot(n_samples, n_steps, row * n_steps + col + 1)
plt.imshow(tweak_reconstructions[dim, col, row], cmap="binary")
plt.axis("off")
plt.show()
###Output
Tweaking output dimension #0
###Markdown
Capsule Networks (CapsNets) Based on the paper: [Dynamic Routing Between Capsules](https://arxiv.org/abs/1710.09829), by Sara Sabour, Nicholas Frosst and Geoffrey E. Hinton (NIPS 2017). Inspired in part from Huadong Liao's implementation: [CapsNet-TensorFlow](https://github.com/naturomics/CapsNet-Tensorflow). Introduction Watch [this video](https://www.youtube.com/embed/pPN8d0E3900) to understand the key ideas behind Capsule Networks:
###Code
from IPython.display import HTML
# Display the video in an iframe:
HTML("""<iframe width="560" height="315"
src="https://www.youtube.com/embed/pPN8d0E3900"
frameborder="0"
allowfullscreen></iframe>""")
###Output
_____no_output_____
###Markdown
Imports To support both Python 2 and Python 3:
###Code
from __future__ import division, print_function, unicode_literals
###Output
_____no_output_____
###Markdown
To plot pretty figures:
###Code
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
We will need NumPy and TensorFlow:
###Code
import numpy as np
import tensorflow as tf
###Output
_____no_output_____
###Markdown
Reproducibility Let's reset the default graph, in case you re-run this notebook without restarting the kernel:
###Code
tf.reset_default_graph()
###Output
_____no_output_____
###Markdown
Let's set the random seeds so that this notebook always produces the same output:
###Code
np.random.seed(42)
tf.set_random_seed(42)
###Output
_____no_output_____
###Markdown
Load MNIST Yes, I know, it's MNIST again. But hopefully this powerful idea will work as well on larger datasets, time will tell.
###Code
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/")
###Output
Extracting /tmp/data/train-images-idx3-ubyte.gz
Extracting /tmp/data/train-labels-idx1-ubyte.gz
Extracting /tmp/data/t10k-images-idx3-ubyte.gz
Extracting /tmp/data/t10k-labels-idx1-ubyte.gz
###Markdown
Let's look at what these hand-written digit images look like:
###Code
n_samples = 5
plt.figure(figsize=(n_samples * 2, 3))
for index in range(n_samples):
plt.subplot(1, n_samples, index + 1)
sample_image = mnist.train.images[index].reshape(28, 28)
plt.imshow(sample_image, cmap="binary")
plt.axis("off")
plt.show()
###Output
_____no_output_____
###Markdown
And these are the corresponding labels:
###Code
mnist.train.labels[:n_samples]
###Output
_____no_output_____
###Markdown
Now let's build a Capsule Network to classify these images. Here's the overall architecture, enjoy the ASCII art! ;-)Note: for readability, I left out two arrows: Labels → Mask, and Input Images → Reconstruction Loss. ``` Loss ↑ ┌─────────┴─────────┐ Labels → Margin Loss Reconstruction Loss ↑ ↑ Length Decoder ↑ ↑ Digit Capsules ────Mask────┘ ↖↑↗ ↖↑↗ ↖↑↗ Primary Capsules ↑ Input Images``` We are going to build the graph starting from the bottom layer, and gradually move up, left side first. Let's go! Input Images Let's start by creating a placeholder for the input images (28×28 pixels, 1 color channel = grayscale).
###Code
X = tf.placeholder(shape=[None, 28, 28, 1], dtype=tf.float32, name="X")
###Output
_____no_output_____
###Markdown
Primary Capsules The first layer will be composed of 32 maps of 6×6 capsules each, where each capsule will output an 8D activation vector:
###Code
caps1_n_maps = 32
caps1_n_caps = caps1_n_maps * 6 * 6 # 1152 primary capsules
caps1_n_dims = 8
###Output
_____no_output_____
###Markdown
To compute their outputs, we first apply two regular convolutional layers:
###Code
conv1_params = {
"filters": 256,
"kernel_size": 9,
"strides": 1,
"padding": "valid",
"activation": tf.nn.relu,
}
conv2_params = {
"filters": caps1_n_maps * caps1_n_dims, # 256 convolutional filters
"kernel_size": 9,
"strides": 2,
"padding": "valid",
"activation": tf.nn.relu
}
conv1 = tf.layers.conv2d(X, name="conv1", **conv1_params)
conv2 = tf.layers.conv2d(conv1, name="conv2", **conv2_params)
###Output
_____no_output_____
###Markdown
Note: since we used a kernel size of 9 and no padding (for some reason, that's what `"valid"` means), the image shrunk by 9-1=8 pixels after each convolutional layer (28×28 to 20×20, then 20×20 to 12×12), and since we used a stride of 2 in the second convolutional layer, the image size was divided by 2. This is how we end up with 6×6 feature maps. Next, we reshape the output to get a bunch of 8D vectors representing the outputs of the primary capsules. The output of `conv2` is an array containing 32×8=256 feature maps for each instance, where each feature map is 6×6. So the shape of this output is (_batch size_, 6, 6, 256). We want to chop the 256 into 32 vectors of 8 dimensions each. We could do this by reshaping to (_batch size_, 6, 6, 32, 8). However, since this first capsule layer will be fully connected to the next capsule layer, we can simply flatten the 6×6 grids. This means we just need to reshape to (_batch size_, 6×6×32, 8).
###Code
caps1_raw = tf.reshape(conv2, [-1, caps1_n_caps, caps1_n_dims],
name="caps1_raw")
###Output
_____no_output_____
###Markdown
Now we need to squash these vectors. Let's define the `squash()` function, based on equation (1) from the paper:$\operatorname{squash}(\mathbf{s}) = \dfrac{\|\mathbf{s}\|^2}{1 + \|\mathbf{s}\|^2} \dfrac{\mathbf{s}}{\|\mathbf{s}\|}$The `squash()` function will squash all vectors in the given array, along the given axis (by default, the last axis).**Caution**, a nasty bug is waiting to bite you: the derivative of $\|\mathbf{s}\|$ is undefined when $\|\mathbf{s}\|=0$, so we can't just use `tf.norm()`, or else it will blow up during training: if a vector is zero, the gradients will be `nan`, so when the optimizer updates the variables, they will also become `nan`, and from then on you will be stuck in `nan` land. The solution is to implement the norm manually by computing the square root of the sum of squares plus a tiny epsilon value: $\|\mathbf{s}\| \approx \sqrt{\sum\limits_i{{s_i}^2}\,\,+ \epsilon}$.
###Code
def squash(s, axis=-1, epsilon=1e-7, name=None):
with tf.name_scope(name, default_name="squash"):
squared_norm = tf.reduce_sum(tf.square(s), axis=axis,
keep_dims=True)
safe_norm = tf.sqrt(squared_norm + epsilon)
squash_factor = squared_norm / (1. + squared_norm)
unit_vector = s / safe_norm
return squash_factor * unit_vector
###Output
_____no_output_____
###Markdown
Now let's apply this function to get the output $\mathbf{u}_i$ of each primary capsules $i$ :
###Code
caps1_output = squash(caps1_raw, name="caps1_output")
###Output
_____no_output_____
###Markdown
Great! We have the output of the first capsule layer. It wasn't too hard, was it? However, computing the next layer is where the fun really begins. Digit Capsules To compute the output of the digit capsules, we must first compute the predicted output vectors (one for each primary / digit capsule pair). Then we can run the routing by agreement algorithm. Compute the Predicted Output Vectors The digit capsule layer contains 10 capsules (one for each digit) of 16 dimensions each:
###Code
caps2_n_caps = 10
caps2_n_dims = 16
###Output
_____no_output_____
###Markdown
For each capsule $i$ in the first layer, we want to predict the output of every capsule $j$ in the second layer. For this, we will need a transformation matrix $\mathbf{W}_{i,j}$ (one for each pair of capsules ($i$, $j$)), then we can compute the predicted output $\hat{\mathbf{u}}_{j|i} = \mathbf{W}_{i,j} \, \mathbf{u}_i$ (equation (2)-right in the paper). Since we want to transform an 8D vector into a 16D vector, each transformation matrix $\mathbf{W}_{i,j}$ must have a shape of (16, 8). To compute $\hat{\mathbf{u}}_{j|i}$ for every pair of capsules ($i$, $j$), we will use a nice feature of the `tf.matmul()` function: you probably know that it lets you multiply two matrices, but you may not know that it also lets you multiply higher dimensional arrays. It treats the arrays as arrays of matrices, and it performs itemwise matrix multiplication. For example, suppose you have two 4D arrays, each containing a 2×3 grid of matrices. The first contains matrices $\mathbf{A}, \mathbf{B}, \mathbf{C}, \mathbf{D}, \mathbf{E}, \mathbf{F}$ and the second contains matrices $\mathbf{G}, \mathbf{H}, \mathbf{I}, \mathbf{J}, \mathbf{K}, \mathbf{L}$. If you multiply these two 4D arrays using the `tf.matmul()` function, this is what you get:$\pmatrix{\mathbf{A} & \mathbf{B} & \mathbf{C} \\\mathbf{D} & \mathbf{E} & \mathbf{F}} \times\pmatrix{\mathbf{G} & \mathbf{H} & \mathbf{I} \\\mathbf{J} & \mathbf{K} & \mathbf{L}} = \pmatrix{\mathbf{AG} & \mathbf{BH} & \mathbf{CI} \\\mathbf{DJ} & \mathbf{EK} & \mathbf{FL}}$ We can apply this function to compute $\hat{\mathbf{u}}_{j|i}$ for every pair of capsules ($i$, $j$) like this (recall that there are 6×6×32=1152 capsules in the first layer, and 10 in the second layer):$\pmatrix{ \mathbf{W}_{1,1} & \mathbf{W}_{1,2} & \cdots & \mathbf{W}_{1,10} \\ \mathbf{W}_{2,1} & \mathbf{W}_{2,2} & \cdots & \mathbf{W}_{2,10} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{W}_{1152,1} & \mathbf{W}_{1152,2} & \cdots & \mathbf{W}_{1152,10}} \times\pmatrix{ \mathbf{u}_1 & \mathbf{u}_1 & \cdots & \mathbf{u}_1 \\ \mathbf{u}_2 & \mathbf{u}_2 & \cdots & \mathbf{u}_2 \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{u}_{1152} & \mathbf{u}_{1152} & \cdots & \mathbf{u}_{1152}}=\pmatrix{\hat{\mathbf{u}}_{1|1} & \hat{\mathbf{u}}_{2|1} & \cdots & \hat{\mathbf{u}}_{10|1} \\\hat{\mathbf{u}}_{1|2} & \hat{\mathbf{u}}_{2|2} & \cdots & \hat{\mathbf{u}}_{10|2} \\\vdots & \vdots & \ddots & \vdots \\\hat{\mathbf{u}}_{1|1152} & \hat{\mathbf{u}}_{2|1152} & \cdots & \hat{\mathbf{u}}_{10|1152}}$ The shape of the first array is (1152, 10, 16, 8), and the shape of the second array is (1152, 10, 8, 1). Note that the second array must contain 10 identical copies of the vectors $\mathbf{u}_1$ to $\mathbf{u}_{1152}$. To create this array, we will use the handy `tf.tile()` function, which lets you create an array containing many copies of a base array, tiled in any way you want. Oh, wait a second! We forgot one dimension: _batch size_. Say we feed 50 images to the capsule network, it will make predictions for these 50 images simultaneously. So the shape of the first array must be (50, 1152, 10, 16, 8), and the shape of the second array must be (50, 1152, 10, 8, 1). The first layer capsules actually already output predictions for all 50 images, so the second array will be fine, but for the first array, we will need to use `tf.tile()` to have 50 copies of the transformation matrices. Okay, let's start by creating a trainable variable of shape (1, 1152, 10, 16, 8) that will hold all the transformation matrices. 
The first dimension of size 1 will make this array easy to tile. We initialize this variable randomly using a normal distribution with a standard deviation to 0.01.
###Code
init_sigma = 0.01
W_init = tf.random_normal(
shape=(1, caps1_n_caps, caps2_n_caps, caps2_n_dims, caps1_n_dims),
stddev=init_sigma, dtype=tf.float32, name="W_init")
W = tf.Variable(W_init, name="W")
###Output
_____no_output_____
###Markdown
Now we can create the first array by repeating `W` once per instance:
###Code
batch_size = tf.shape(X)[0]
W_tiled = tf.tile(W, [batch_size, 1, 1, 1, 1], name="W_tiled")
###Output
_____no_output_____
###Markdown
That's it! On to the second array, now. As discussed earlier, we need to create an array of shape (_batch size_, 1152, 10, 8, 1), containing the output of the first layer capsules, repeated 10 times (once per digit, along the third dimension, which is axis=2). The `caps1_output` array has a shape of (_batch size_, 1152, 8), so we first need to expand it twice, to get an array of shape (_batch size_, 1152, 1, 8, 1), then we can repeat it 10 times along the third dimension:
###Code
caps1_output_expanded = tf.expand_dims(caps1_output, -1,
name="caps1_output_expanded")
caps1_output_tile = tf.expand_dims(caps1_output_expanded, 2,
name="caps1_output_tile")
caps1_output_tiled = tf.tile(caps1_output_tile, [1, 1, caps2_n_caps, 1, 1],
name="caps1_output_tiled")
###Output
_____no_output_____
###Markdown
Let's check the shape of the first array:
###Code
W_tiled
###Output
_____no_output_____
###Markdown
Good, and now the second:
###Code
caps1_output_tiled
###Output
_____no_output_____
###Markdown
Yes! Now, to get all the predicted output vectors $\hat{\mathbf{u}}_{j|i}$, we just need to multiply these two arrays using `tf.matmul()`, as explained earlier:
###Code
caps2_predicted = tf.matmul(W_tiled, caps1_output_tiled,
name="caps2_predicted")
###Output
_____no_output_____
###Markdown
Let's check the shape:
###Code
caps2_predicted
###Output
_____no_output_____
###Markdown
Perfect, for each instance in the batch (we don't know the batch size yet, hence the "?") and for each pair of first and second layer capsules (1152×10) we have a 16D predicted output column vector (16×1). We're ready to apply the routing by agreement algorithm! Routing by agreement First let's initialize the raw routing weights $b_{i,j}$ to zero:
###Code
raw_weights = tf.zeros([batch_size, caps1_n_caps, caps2_n_caps, 1, 1],
dtype=np.float32, name="raw_weights")
###Output
_____no_output_____
###Markdown
We will see why we need the last two dimensions of size 1 in a minute. Round 1 First, let's apply the softmax function to compute the routing weights, $\mathbf{c}_{i} = \operatorname{softmax}(\mathbf{b}_i)$ (equation (3) in the paper):
###Code
routing_weights = tf.nn.softmax(raw_weights, dim=2, name="routing_weights")
###Output
_____no_output_____
###Markdown
Now let's compute the weighted sum of all the predicted output vectors for each second-layer capsule, $\mathbf{s}_j = \sum\limits_{i}{c_{i,j}\hat{\mathbf{u}}_{j|i}}$ (equation (2)-left in the paper):
###Code
weighted_predictions = tf.multiply(routing_weights, caps2_predicted,
name="weighted_predictions")
weighted_sum = tf.reduce_sum(weighted_predictions, axis=1, keep_dims=True,
name="weighted_sum")
###Output
_____no_output_____
###Markdown
There are a couple important details to note here:* To perform elementwise matrix multiplication (also called the Hadamard product, noted $\circ$), we use the `tf.multiply()` function. It requires `routing_weights` and `caps2_predicted` to have the same rank, which is why we added two extra dimensions of size 1 to `routing_weights`, earlier.* The shape of `routing_weights` is (_batch size_, 1152, 10, 1, 1) while the shape of `caps2_predicted` is (_batch size_, 1152, 10, 16, 1). Since they don't match on the fourth dimension (1 _vs_ 16), `tf.multiply()` automatically _broadcasts_ the `routing_weights` 16 times along that dimension. If you are not familiar with broadcasting, a simple example might help: $ \pmatrix{1 & 2 & 3 \\ 4 & 5 & 6} \circ \pmatrix{10 & 100 & 1000} = \pmatrix{1 & 2 & 3 \\ 4 & 5 & 6} \circ \pmatrix{10 & 100 & 1000 \\ 10 & 100 & 1000} = \pmatrix{10 & 200 & 3000 \\ 40 & 500 & 6000} $ And finally, let's apply the squash function to get the outputs of the second layer capsules at the end of the first iteration of the routing by agreement algorithm, $\mathbf{v}_j = \operatorname{squash}(\mathbf{s}_j)$ :
###Code
caps2_output_round_1 = squash(weighted_sum, axis=-2,
name="caps2_output_round_1")
caps2_output_round_1
###Output
_____no_output_____
###Markdown
Good! We have ten 16D output vectors for each instance, as expected. Round 2 First, let's measure how close each predicted vector $\hat{\mathbf{u}}_{j|i}$ is to the actual output vector $\mathbf{v}_j$ by computing their scalar product $\hat{\mathbf{u}}_{j|i} \cdot \mathbf{v}_j$. * Quick math reminder: if $\vec{a}$ and $\vec{b}$ are two vectors of equal length, and $\mathbf{a}$ and $\mathbf{b}$ are their corresponding column vectors (i.e., matrices with a single column), then $\mathbf{a}^T \mathbf{b}$ (i.e., the matrix multiplication of the transpose of $\mathbf{a}$, and $\mathbf{b}$) is a 1×1 matrix containing the scalar product of the two vectors $\vec{a}\cdot\vec{b}$. In Machine Learning, we generally represent vectors as column vectors, so when we talk about computing the scalar product $\hat{\mathbf{u}}_{j|i} \cdot \mathbf{v}_j$, this actually means computing ${\hat{\mathbf{u}}_{j|i}}^T \mathbf{v}_j$. Since we need to compute the scalar product $\hat{\mathbf{u}}_{j|i} \cdot \mathbf{v}_j$ for each instance, and for each pair of first and second level capsules $(i, j)$, we will once again take advantage of the fact that `tf.matmul()` can multiply many matrices simultaneously. This will require playing around with `tf.tile()` to get all dimensions to match (except for the last 2), just like we did earlier. So let's look at the shape of `caps2_predicted`, which holds all the predicted output vectors $\hat{\mathbf{u}}_{j|i}$ for each instance and each pair of capsules:
###Code
caps2_predicted
###Output
_____no_output_____
###Markdown
And now let's look at the shape of `caps2_output_round_1`, which holds 10 outputs vectors of 16D each, for each instance:
###Code
caps2_output_round_1
###Output
_____no_output_____
###Markdown
To get these shapes to match, we just need to tile the `caps2_output_round_1` array 1152 times (once per primary capsule) along the second dimension:
###Code
caps2_output_round_1_tiled = tf.tile(
caps2_output_round_1, [1, caps1_n_caps, 1, 1, 1],
name="caps2_output_round_1_tiled")
###Output
_____no_output_____
###Markdown
And now we are ready to call `tf.matmul()` (note that we must tell it to transpose the matrices in the first array, to get ${\hat{\mathbf{u}}_{j|i}}^T$ instead of $\hat{\mathbf{u}}_{j|i}$):
###Code
agreement = tf.matmul(caps2_predicted, caps2_output_round_1_tiled,
transpose_a=True, name="agreement")
###Output
_____no_output_____
###Markdown
We can now update the raw routing weights $b_{i,j}$ by simply adding the scalar product $\hat{\mathbf{u}}_{j|i} \cdot \mathbf{v}_j$ we just computed: $b_{i,j} \gets b_{i,j} + \hat{\mathbf{u}}_{j|i} \cdot \mathbf{v}_j$ (see Procedure 1, step 7, in the paper).
###Code
raw_weights_round_2 = tf.add(raw_weights, agreement,
name="raw_weights_round_2")
###Output
_____no_output_____
###Markdown
The rest of round 2 is the same as in round 1:
###Code
routing_weights_round_2 = tf.nn.softmax(raw_weights_round_2,
dim=2,
name="routing_weights_round_2")
weighted_predictions_round_2 = tf.multiply(routing_weights_round_2,
caps2_predicted,
name="weighted_predictions_round_2")
weighted_sum_round_2 = tf.reduce_sum(weighted_predictions_round_2,
axis=1, keep_dims=True,
name="weighted_sum_round_2")
caps2_output_round_2 = squash(weighted_sum_round_2,
axis=-2,
name="caps2_output_round_2")
###Output
_____no_output_____
###Markdown
We could go on for a few more rounds, by repeating exactly the same steps as in round 2, but to keep things short, we will stop here:
###Code
caps2_output = caps2_output_round_2
###Output
_____no_output_____
###Markdown
Static or Dynamic Loop? In the code above, we created different operations in the TensorFlow graph for each round of the routing by agreement algorithm. In other words, it's a static loop. Sure, instead of copy/pasting the code several times, we could have written a `for` loop in Python, but this would not change the fact that the graph would end up containing different operations for each routing iteration. It's actually okay since we generally want fewer than 5 routing iterations, so the graph won't grow too big. As a minimal sketch (for illustration only; the rest of the notebook keeps the unrolled version above), such a Python `for` loop might look like this:
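###Code
# Minimal sketch (illustration only; the notebook keeps the unrolled
# version above): the same two routing rounds written as a Python loop.
# Each pass through this loop still adds *new* operations to the graph,
# so it is still a static loop.
n_routing_rounds = 2
raw_w = raw_weights
for r in range(n_routing_rounds):
    routing_w = tf.nn.softmax(raw_w, dim=2)
    weighted = tf.multiply(routing_w, caps2_predicted)
    caps2_out = squash(tf.reduce_sum(weighted, axis=1, keep_dims=True),
                       axis=-2)
    if r < n_routing_rounds - 1:  # no agreement update after the last round
        tiled = tf.tile(caps2_out, [1, caps1_n_caps, 1, 1, 1])
        raw_w = tf.add(raw_w, tf.matmul(caps2_predicted, tiled,
                                        transpose_a=True))
###Output
_____no_output_____
###Markdown
However, you may prefer to implement the routing loop within the TensorFlow graph itself rather than using a Python `for` loop. To do this, you would need to use TensorFlow's `tf.while_loop()` function. This way, all routing iterations would reuse the same operations in the graph: it would be a dynamic loop. For example, here is how to build a small loop that computes the sum of squares from 1 to 99: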
###Code
def condition(input, counter):
return tf.less(counter, 100)
def loop_body(input, counter):
output = tf.add(input, tf.square(counter))
return output, tf.add(counter, 1)
with tf.name_scope("compute_sum_of_squares"):
counter = tf.constant(1)
sum_of_squares = tf.constant(0)
result = tf.while_loop(condition, loop_body, [sum_of_squares, counter])
with tf.Session() as sess:
print(sess.run(result))
###Output
(328350, 100)
###Markdown
As you can see, the `tf.while_loop()` function expects the loop condition and body to be provided _via_ two functions. These functions will be called only once by TensorFlow, during the graph construction phase, _not_ while executing the graph. The `tf.while_loop()` function stitches together the graph fragments created by `condition()` and `loop_body()` with some additional operations to create the loop. Also note that during training, TensorFlow will automagically handle backpropagation through the loop, so you don't need to worry about that. Of course, we could have used this one-liner instead! ;-)
###Code
sum([i**2 for i in range(1, 100)])
###Output
_____no_output_____
###Markdown
Joke aside, apart from reducing the graph size, using a dynamic loop instead of a static loop can help reduce how much GPU RAM you use (if you are using a GPU). Indeed, if you set `swap_memory=True` when calling the `tf.while_loop()` function, TensorFlow will automatically check GPU RAM usage at each loop iteration, and it will take care of swapping memory between the GPU and the CPU when needed. Since CPU memory is much cheaper and more abundant than GPU RAM, this can really make a big difference. To make this concrete, here is a hedged sketch of what the routing loop might look like as a dynamic loop (illustration only; it is not wired into the rest of the notebook, which keeps the static version):
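###Code
# Hedged sketch (illustration only): dynamic routing with tf.while_loop().
# The loop carries (raw routing weights, round counter) and closes over
# caps2_predicted. Setting swap_memory=True enables GPU<->CPU swapping.
n_routing_iterations = 2  # 2 rounds, matching the static version above

def routing_condition(raw_w, counter):
    return tf.less(counter, n_routing_iterations - 1)

def routing_body(raw_w, counter):
    routing_w = tf.nn.softmax(raw_w, dim=2)
    weighted = tf.multiply(routing_w, caps2_predicted)
    v = squash(tf.reduce_sum(weighted, axis=1, keep_dims=True), axis=-2)
    v_tiled = tf.tile(v, [1, caps1_n_caps, 1, 1, 1])
    agreement = tf.matmul(caps2_predicted, v_tiled, transpose_a=True)
    return tf.add(raw_w, agreement), tf.add(counter, 1)

raw_w_final, _ = tf.while_loop(routing_condition, routing_body,
                               [raw_weights, tf.constant(0)],
                               swap_memory=True)
# One final forward pass with the final routing weights:
routing_w_final = tf.nn.softmax(raw_w_final, dim=2)
caps2_output_dynamic = squash(
    tf.reduce_sum(tf.multiply(routing_w_final, caps2_predicted),
                  axis=1, keep_dims=True),
    axis=-2)
###Output
_____no_output_____
###Markdown
Estimated Class Probabilities (Length) The lengths of the output vectors represent the class probabilities, so we could just use `tf.norm()` to compute them, but as we saw when discussing the squash function, it would be risky, so instead let's create our own `safe_norm()` function: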
###Code
def safe_norm(s, axis=-1, epsilon=1e-7, keep_dims=False, name=None):
with tf.name_scope(name, default_name="safe_norm"):
squared_norm = tf.reduce_sum(tf.square(s), axis=axis,
keep_dims=keep_dims)
return tf.sqrt(squared_norm + epsilon)
y_proba = safe_norm(caps2_output, axis=-2, name="y_proba")
###Output
_____no_output_____
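###Markdown
As a quick sanity check (a throwaway example, built in its own graph so it does not touch the network we are constructing), we can verify that `tf.norm()` has `nan` gradients at the zero vector, while `safe_norm()` stays finite thanks to the epsilon term:
###Code
# Throwaway sanity check: gradients of both norms at the zero vector.
g = tf.Graph()
with g.as_default():
    s = tf.zeros([3])
    grad_norm = tf.gradients(tf.norm(s), [s])[0]    # -> [nan nan nan]
    grad_safe = tf.gradients(safe_norm(s), [s])[0]  # -> [0. 0. 0.]
    with tf.Session(graph=g) as sess:
        print(sess.run([grad_norm, grad_safe]))
###Output
_____no_output_____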
###Markdown
To predict the class of each instance, we can just select the one with the highest estimated probability. To do this, let's start by finding its index using `tf.argmax()`:
###Code
y_proba_argmax = tf.argmax(y_proba, axis=2, name="y_proba_argmax")
###Output
_____no_output_____
###Markdown
Let's look at the shape of `y_proba_argmax`:
###Code
y_proba_argmax
###Output
_____no_output_____
###Markdown
That's what we wanted: for each instance, we now have the index of the longest output vector. Let's get rid of the last two dimensions by using `tf.squeeze()` which removes dimensions of size 1. This gives us the capsule network's predicted class for each instance:
###Code
y_pred = tf.squeeze(y_proba_argmax, axis=[1,2], name="y_pred")
y_pred
###Output
_____no_output_____
###Markdown
Okay, we are now ready to define the training operations, starting with the losses. Labels First, we will need a placeholder for the labels:
###Code
y = tf.placeholder(shape=[None], dtype=tf.int64, name="y")
###Output
_____no_output_____
###Markdown
Margin loss The paper uses a special margin loss to make it possible to detect two or more different digits in each image:$ L_k = T_k \max(0, m^{+} - \|\mathbf{v}_k\|)^2 + \lambda (1 - T_k) \max(0, \|\mathbf{v}_k\| - m^{-})^2$* $T_k$ is equal to 1 if the digit of class $k$ is present, or 0 otherwise.* In the paper, $m^{+} = 0.9$, $m^{-} = 0.1$ and $\lambda = 0.5$.* Note that there was an error in the video (at 15:47): the max operations are squared, not the norms. Sorry about that.
###Code
m_plus = 0.9
m_minus = 0.1
lambda_ = 0.5
###Output
_____no_output_____
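###Markdown
To make the formula concrete, here is a small NumPy check (a hedged, throwaway example with made-up capsule lengths, not part of the graph): a present digit whose capsule is already longer than $m^{+}$ incurs no loss, while an absent digit whose capsule is longer than $m^{-}$ is penalized, scaled by $\lambda$:
###Code
# Made-up lengths for two capsules: digit 0 is present, digit 1 is absent.
v_norm = np.array([0.95, 0.30])
T_k = np.array([1.0, 0.0])
L_k = (T_k * np.maximum(0., m_plus - v_norm) ** 2
       + lambda_ * (1. - T_k) * np.maximum(0., v_norm - m_minus) ** 2)
print(L_k)  # [0.   0.02] -> only the absent-but-long capsule is penalized
###Output
_____no_output_____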
###Markdown
Since `y` will contain the digit classes, from 0 to 9, to get $T_k$ for every instance and every class, we can just use the `tf.one_hot()` function:
###Code
T = tf.one_hot(y, depth=caps2_n_caps, name="T")
###Output
_____no_output_____
###Markdown
A small example should make it clear what this does:
###Code
with tf.Session():
print(T.eval(feed_dict={y: np.array([0, 1, 2, 3, 9])}))
###Output
[[ 1. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 1. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 1. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 1.]]
###Markdown
Now let's compute the norm of the output vector for each output capsule and each instance. First, let's verify the shape of `caps2_output`:
###Code
caps2_output
###Output
_____no_output_____
###Markdown
The 16D output vectors are in the second to last dimension, so let's use the `safe_norm()` function with `axis=-2`:
###Code
caps2_output_norm = safe_norm(caps2_output, axis=-2, keep_dims=True,
name="caps2_output_norm")
###Output
_____no_output_____
###Markdown
Now let's compute $\max(0, m^{+} - \|\mathbf{v}_k\|)^2$, and reshape the result to get a simple matrix of shape (_batch size_, 10):
###Code
present_error_raw = tf.square(tf.maximum(0., m_plus - caps2_output_norm),
name="present_error_raw")
present_error = tf.reshape(present_error_raw, shape=(-1, 10),
name="present_error")
###Output
_____no_output_____
###Markdown
Next let's compute $\max(0, \|\mathbf{v}_k\| - m^{-})^2$ and reshape it:
###Code
absent_error_raw = tf.square(tf.maximum(0., caps2_output_norm - m_minus),
name="absent_error_raw")
absent_error = tf.reshape(absent_error_raw, shape=(-1, 10),
name="absent_error")
###Output
_____no_output_____
###Markdown
We are ready to compute the loss for each instance and each digit:
###Code
L = tf.add(T * present_error, lambda_ * (1.0 - T) * absent_error,
name="L")
###Output
_____no_output_____
###Markdown
Now we can sum the digit losses for each instance ($L_0 + L_1 + \cdots + L_9$), and compute the mean over all instances. This gives us the final margin loss:
###Code
margin_loss = tf.reduce_mean(tf.reduce_sum(L, axis=1), name="margin_loss")
###Output
_____no_output_____
###Markdown
Reconstruction Now let's add a decoder network on top of the capsule network. It is a regular 3-layer fully connected neural network which will learn to reconstruct the input images based on the output of the capsule network. This will force the capsule network to preserve all the information required to reconstruct the digits, across the whole network. This constraint regularizes the model: it reduces the risk of overfitting the training set, and it helps generalize to new digits. Mask The paper mentions that during training, instead of sending all the outputs of the capsule network to the decoder network, we must send only the output vector of the capsule that corresponds to the target digit. All the other output vectors must be masked out. At inference time, we must mask all output vectors except for the longest one, i.e., the one that corresponds to the predicted digit. You can see this in the paper's figure 2 (at 18:15 in the video): all output vectors are masked out, except for the reconstruction target's output vector. We need a placeholder to tell TensorFlow whether we want to mask the output vectors based on the labels (`True`) or on the predictions (`False`, the default):
###Code
mask_with_labels = tf.placeholder_with_default(False, shape=(),
name="mask_with_labels")
###Output
_____no_output_____
###Markdown
Now let's use `tf.cond()` to define the reconstruction targets as the labels `y` if `mask_with_labels` is `True`, or `y_pred` otherwise.
###Code
reconstruction_targets = tf.cond(mask_with_labels, # condition
lambda: y, # if True
lambda: y_pred, # if False
name="reconstruction_targets")
###Output
_____no_output_____
###Markdown
Note that the `tf.cond()` function expects the if-True and if-False tensors to be passed _via_ functions: these functions will be called just once during the graph construction phase (not during the execution phase), similar to `tf.while_loop()`. This allows TensorFlow to add the necessary operations to handle the conditional evaluation of the if-True or if-False tensors. However, in our case, the tensors `y` and `y_pred` are already created by the time we call `tf.cond()`, so unfortunately TensorFlow will consider both `y` and `y_pred` to be dependencies of the `reconstruction_targets` tensor. The `reconstruction_targets` tensor will end up with the correct value, but:1. whenever we evaluate a tensor that depends on `reconstruction_targets`, the `y_pred` tensor will be evaluated (even if `mask_with_labels` is `True`). This is not a big deal because computing `y_pred` adds no computing overhead during training, since we need it anyway to compute the margin loss. And during testing, if we are doing classification, we won't need reconstructions, so `reconstruction_targets` won't be evaluated at all.2. we will always need to feed a value for the `y` placeholder (even if `mask_with_labels` is `False`). This is a bit annoying, but we can pass an empty array, because TensorFlow won't use it anyway (it just does not know it yet when it checks for dependencies). Now that we have the reconstruction targets, let's create the reconstruction mask. It should be equal to 1.0 for the target class, and 0.0 for the other classes, for each instance. For this we can just use the `tf.one_hot()` function:
###Code
reconstruction_mask = tf.one_hot(reconstruction_targets,
depth=caps2_n_caps,
name="reconstruction_mask")
###Output
_____no_output_____
###Markdown
Let's check the shape of `reconstruction_mask`:
###Code
reconstruction_mask
###Output
_____no_output_____
###Markdown
Let's compare this to the shape of `caps2_output`:
###Code
caps2_output
###Output
_____no_output_____
###Markdown
Mmh, its shape is (_batch size_, 1, 10, 16, 1). We want to multiply it by the `reconstruction_mask`, but the shape of the `reconstruction_mask` is (_batch size_, 10). We must reshape it to (_batch size_, 1, 10, 1, 1) to make multiplication possible:
###Code
reconstruction_mask_reshaped = tf.reshape(
reconstruction_mask, [-1, 1, caps2_n_caps, 1, 1],
name="reconstruction_mask_reshaped")
###Output
_____no_output_____
###Markdown
At last! We can apply the mask:
###Code
caps2_output_masked = tf.multiply(
caps2_output, reconstruction_mask_reshaped,
name="caps2_output_masked")
caps2_output_masked
###Output
_____no_output_____
###Markdown
One last reshape operation to flatten the decoder's inputs:
###Code
decoder_input = tf.reshape(caps2_output_masked,
[-1, caps2_n_caps * caps2_n_dims],
name="decoder_input")
###Output
_____no_output_____
###Markdown
This gives us an array of shape (_batch size_, 160):
###Code
decoder_input
###Output
_____no_output_____
###Markdown
Decoder Now let's build the decoder. It's quite simple: two dense (fully connected) ReLU layers followed by a dense output sigmoid layer:
###Code
n_hidden1 = 512
n_hidden2 = 1024
n_output = 28 * 28
with tf.name_scope("decoder"):
hidden1 = tf.layers.dense(decoder_input, n_hidden1,
activation=tf.nn.relu,
name="hidden1")
hidden2 = tf.layers.dense(hidden1, n_hidden2,
activation=tf.nn.relu,
name="hidden2")
decoder_output = tf.layers.dense(hidden2, n_output,
activation=tf.nn.sigmoid,
name="decoder_output")
###Output
_____no_output_____
###Markdown
Reconstruction Loss Now let's compute the reconstruction loss. It is just the sum of squared differences between the input image and the reconstructed image:
###Code
X_flat = tf.reshape(X, [-1, n_output], name="X_flat")
squared_difference = tf.square(X_flat - decoder_output,
name="squared_difference")
reconstruction_loss = tf.reduce_sum(squared_difference,
name="reconstruction_loss")
###Output
_____no_output_____
###Markdown
Final Loss The final loss is the sum of the margin loss and the reconstruction loss (scaled down by a factor of 0.0005 to ensure the margin loss dominates training):
###Code
alpha = 0.0005
loss = tf.add(margin_loss, alpha * reconstruction_loss, name="loss")
###Output
_____no_output_____
###Markdown
Final Touches Accuracy To measure our model's accuracy, we need to count the number of instances that are properly classified. For this, we can simply compare `y` and `y_pred`, convert the boolean value to a float32 (0.0 for False, 1.0 for True), and compute the mean over all the instances:
###Code
correct = tf.equal(y, y_pred, name="correct")
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32), name="accuracy")
###Output
_____no_output_____
###Markdown
Training Operations The paper mentions that the authors used the Adam optimizer with TensorFlow's default parameters:
###Code
optimizer = tf.train.AdamOptimizer()
training_op = optimizer.minimize(loss, name="training_op")
###Output
_____no_output_____
###Markdown
Init and Saver And let's add the usual variable initializer, as well as a `Saver`:
###Code
init = tf.global_variables_initializer()
saver = tf.train.Saver()
###Output
_____no_output_____
###Markdown
And... we're done with the construction phase! Please take a moment to celebrate. :) Training Training our capsule network is pretty standard. For simplicity, we won't do any fancy hyperparameter tuning or dropout; we will just run the training operation over and over again, displaying the loss, and at the end of each epoch, measure the accuracy on the validation set, display it, and save the model if the validation loss is the lowest seen so far (this is a basic way to implement early stopping, without actually stopping; a sketch of a variant that really stops follows the training run below). Hopefully the code should be self-explanatory, but here are a few details to note:* if a checkpoint file exists, it will be restored (this makes it possible to interrupt training, then restart it later from the last checkpoint),* we must not forget to feed `mask_with_labels=True` during training,* during testing, we let `mask_with_labels` default to `False` (but we still feed the labels since they are required to compute the accuracy),* the images loaded _via_ `mnist.train.next_batch()` are represented as `float32` arrays of shape \[784\], but the input placeholder `X` expects a `float32` array of shape \[28, 28, 1\], so we must reshape the images before we feed them to our model,* we evaluate the model's loss and accuracy on the full validation set (5,000 instances). To view progress and support systems that don't have a lot of RAM, the code evaluates the loss and accuracy on one batch at a time, and computes the mean loss and mean accuracy at the end.*Warning*: if you don't have a GPU, training will take a very long time (at least a few hours). With a GPU, it should take just a few minutes per epoch (e.g., 6 minutes on an NVidia GeForce GTX 1080Ti).
###Code
n_epochs = 10
batch_size = 50
restore_checkpoint = True
n_iterations_per_epoch = mnist.train.num_examples // batch_size
n_iterations_validation = mnist.validation.num_examples // batch_size
best_loss_val = np.infty
checkpoint_path = "./my_capsule_network"
with tf.Session() as sess:
if restore_checkpoint and tf.train.checkpoint_exists(checkpoint_path):
saver.restore(sess, checkpoint_path)
else:
init.run()
for epoch in range(n_epochs):
for iteration in range(1, n_iterations_per_epoch + 1):
X_batch, y_batch = mnist.train.next_batch(batch_size)
# Run the training operation and measure the loss:
_, loss_train = sess.run(
[training_op, loss],
feed_dict={X: X_batch.reshape([-1, 28, 28, 1]),
y: y_batch,
mask_with_labels: True})
print("\rIteration: {}/{} ({:.1f}%) Loss: {:.5f}".format(
iteration, n_iterations_per_epoch,
iteration * 100 / n_iterations_per_epoch,
loss_train),
end="")
# At the end of each epoch,
# measure the validation loss and accuracy:
loss_vals = []
acc_vals = []
for iteration in range(1, n_iterations_validation + 1):
X_batch, y_batch = mnist.validation.next_batch(batch_size)
loss_val, acc_val = sess.run(
[loss, accuracy],
feed_dict={X: X_batch.reshape([-1, 28, 28, 1]),
y: y_batch})
loss_vals.append(loss_val)
acc_vals.append(acc_val)
print("\rEvaluating the model: {}/{} ({:.1f}%)".format(
iteration, n_iterations_validation,
iteration * 100 / n_iterations_validation),
end=" " * 10)
loss_val = np.mean(loss_vals)
acc_val = np.mean(acc_vals)
print("\rEpoch: {} Val accuracy: {:.4f}% Loss: {:.6f}{}".format(
epoch + 1, acc_val * 100, loss_val,
" (improved)" if loss_val < best_loss_val else ""))
# And save the model if it improved:
if loss_val < best_loss_val:
save_path = saver.save(sess, checkpoint_path)
best_loss_val = loss_val
###Output
Epoch: 1 Val accuracy: 98.7000% Loss: 0.416563 (improved)
Epoch: 2 Val accuracy: 99.0400% Loss: 0.291740 (improved)
Epoch: 3 Val accuracy: 99.1200% Loss: 0.241666 (improved)
Epoch: 4 Val accuracy: 99.2800% Loss: 0.211442 (improved)
Epoch: 5 Val accuracy: 99.3200% Loss: 0.196026 (improved)
Epoch: 6 Val accuracy: 99.3600% Loss: 0.186166 (improved)
Epoch: 7 Val accuracy: 99.3400% Loss: 0.179290 (improved)
Epoch: 8 Val accuracy: 99.3800% Loss: 0.173593 (improved)
Epoch: 9 Val accuracy: 99.3600% Loss: 0.169071 (improved)
Epoch: 10 Val accuracy: 99.3400% Loss: 0.165477 (improved)
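###Markdown
By the way, if you wanted the loop to actually stop early (rather than merely checkpointing the best model), a common pattern is to count how many epochs have passed without a validation-loss improvement and break after a given patience. A minimal sketch (illustration only, using a hypothetical `patience` parameter; the run above did not use it):
###Code
# Hedged sketch of patience-based early stopping around the epoch loop:
# patience = 3
# epochs_without_improvement = 0
# for epoch in range(n_epochs):
#     ...  # training + validation as above
#     if loss_val < best_loss_val:
#         best_loss_val = loss_val
#         epochs_without_improvement = 0
#         saver.save(sess, checkpoint_path)
#     else:
#         epochs_without_improvement += 1
#         if epochs_without_improvement >= patience:
#             print("\nEarly stopping!")
#             break
###Output
_____no_output_____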
###Markdown
Training is finished; we reached over 99.3% accuracy on the validation set after just 5 epochs, so things are looking good. Now let's evaluate the model on the test set. Evaluation
###Code
n_iterations_test = mnist.test.num_examples // batch_size
with tf.Session() as sess:
saver.restore(sess, checkpoint_path)
loss_tests = []
acc_tests = []
for iteration in range(1, n_iterations_test + 1):
X_batch, y_batch = mnist.test.next_batch(batch_size)
loss_test, acc_test = sess.run(
[loss, accuracy],
feed_dict={X: X_batch.reshape([-1, 28, 28, 1]),
y: y_batch})
loss_tests.append(loss_test)
acc_tests.append(acc_test)
print("\rEvaluating the model: {}/{} ({:.1f}%)".format(
iteration, n_iterations_test,
iteration * 100 / n_iterations_test),
end=" " * 10)
loss_test = np.mean(loss_tests)
acc_test = np.mean(acc_tests)
print("\rFinal test accuracy: {:.4f}% Loss: {:.6f}".format(
acc_test * 100, loss_test))
###Output
INFO:tensorflow:Restoring parameters from ./my_capsule_network
Final test accuracy: 99.4300% Loss: 0.165047
###Markdown
We reach 99.43% accuracy on the test set. Pretty nice. :) Predictions Now let's make some predictions! We first fix a few images from the test set, then we start a session, restore the trained model, evaluate `caps2_output` to get the capsule network's output vectors, `decoder_output` to get the reconstructions, and `y_pred` to get the class predictions:
###Code
n_samples = 5
sample_images = mnist.test.images[:n_samples].reshape([-1, 28, 28, 1])
with tf.Session() as sess:
saver.restore(sess, checkpoint_path)
caps2_output_value, decoder_output_value, y_pred_value = sess.run(
[caps2_output, decoder_output, y_pred],
feed_dict={X: sample_images,
y: np.array([], dtype=np.int64)})
###Output
INFO:tensorflow:Restoring parameters from ./my_capsule_network
###Markdown
Note: we feed `y` with an empty array, but TensorFlow will not use it, as explained earlier. And now let's plot the images and their labels, followed by the corresponding reconstructions and predictions:
###Code
sample_images = sample_images.reshape(-1, 28, 28)
reconstructions = decoder_output_value.reshape([-1, 28, 28])
plt.figure(figsize=(n_samples * 2, 3))
for index in range(n_samples):
plt.subplot(1, n_samples, index + 1)
plt.imshow(sample_images[index], cmap="binary")
plt.title("Label:" + str(mnist.test.labels[index]))
plt.axis("off")
plt.show()
plt.figure(figsize=(n_samples * 2, 3))
for index in range(n_samples):
plt.subplot(1, n_samples, index + 1)
plt.title("Predicted:" + str(y_pred_value[index]))
plt.imshow(reconstructions[index], cmap="binary")
plt.axis("off")
plt.show()
###Output
_____no_output_____
###Markdown
The predictions are all correct, and the reconstructions look great. Hurray! Interpreting the Output Vectors Let's tweak the output vectors to see what their pose parameters represent. First, let's check the shape of the `caps2_output_value` NumPy array:
###Code
caps2_output_value.shape
###Output
_____no_output_____
###Markdown
Let's create a function that will tweak each of the 16 pose parameters (dimensions) in all output vectors. Each tweaked output vector will be identical to the original output vector, except that one of its pose parameters will be incremented by a value varying from -0.5 to 0.5. By default there will be 11 steps (-0.5, -0.4, ..., +0.4, +0.5). This function will return an array of shape (_tweaked pose parameters_=16, _steps_=11, _batch size_=5, 1, 10, 16, 1):
###Code
def tweak_pose_parameters(output_vectors, min=-0.5, max=0.5, n_steps=11):
    steps = np.linspace(min, max, n_steps) # -0.5, -0.4, ..., +0.5
pose_parameters = np.arange(caps2_n_dims) # 0, 1, ..., 15
tweaks = np.zeros([caps2_n_dims, n_steps, 1, 1, 1, caps2_n_dims, 1])
tweaks[pose_parameters, :, 0, 0, 0, pose_parameters, 0] = steps
output_vectors_expanded = output_vectors[np.newaxis, np.newaxis]
return tweaks + output_vectors_expanded
###Output
_____no_output_____
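###Markdown
A quick check (hedged example) that the function returns the announced shape: 16 tweaked pose parameters × 11 steps, broadcast over our 5 sample outputs:
###Code
tweak_pose_parameters(caps2_output_value).shape  # (16, 11, 5, 1, 10, 16, 1)
###Output
_____no_output_____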
###Markdown
Let's compute all the tweaked output vectors and reshape the result to (_parameters_×_steps_×_instances_, 1, 10, 16, 1) so we can feed the array to the decoder:
###Code
n_steps = 11
tweaked_vectors = tweak_pose_parameters(caps2_output_value, n_steps=n_steps)
tweaked_vectors_reshaped = tweaked_vectors.reshape(
[-1, 1, caps2_n_caps, caps2_n_dims, 1])
###Output
_____no_output_____
###Markdown
Now let's feed these tweaked output vectors to the decoder and get the reconstructions it produces:
###Code
tweak_labels = np.tile(mnist.test.labels[:n_samples], caps2_n_dims * n_steps)
with tf.Session() as sess:
saver.restore(sess, checkpoint_path)
decoder_output_value = sess.run(
decoder_output,
feed_dict={caps2_output: tweaked_vectors_reshaped,
mask_with_labels: True,
y: tweak_labels})
###Output
INFO:tensorflow:Restoring parameters from ./my_capsule_network
###Markdown
Let's reshape the decoder's output so we can easily iterate on the output dimension, the tweak steps, and the instances:
###Code
tweak_reconstructions = decoder_output_value.reshape(
[caps2_n_dims, n_steps, n_samples, 28, 28])
###Output
_____no_output_____
###Markdown
Lastly, let's plot all the reconstructions, for the first 3 output dimensions, for each tweaking step (column) and each digit (row):
###Code
for dim in range(3):
print("Tweaking output dimension #{}".format(dim))
plt.figure(figsize=(n_steps / 1.2, n_samples / 1.5))
for row in range(n_samples):
for col in range(n_steps):
plt.subplot(n_samples, n_steps, row * n_steps + col + 1)
plt.imshow(tweak_reconstructions[dim, col, row], cmap="binary")
plt.axis("off")
plt.show()
###Output
Tweaking output dimension #0
###Markdown
Capsule Networks (CapsNets) Based on the paper: [Dynamic Routing Between Capsules](https://arxiv.org/abs/1710.09829), by Sara Sabour, Nicholas Frosst and Geoffrey E. Hinton (NIPS 2017). Inspired in part from Huadong Liao's implementation: [CapsNet-TensorFlow](https://github.com/naturomics/CapsNet-Tensorflow). Introduction Watch [this video](https://youtu.be/pPN8d0E3900) to understand the key ideas behind Capsule Networks:
###Code
from IPython.display import HTML
HTML("""<iframe width="560" height="315" src="https://www.youtube.com/embed/pPN8d0E3900" frameborder="0" allowfullscreen></iframe>""")
###Output
_____no_output_____
###Markdown
You may also want to watch [this video](https://youtu.be/2Kawrd5szHE), which presents the main difficulties in this notebook:
###Code
HTML("""<iframe width="560" height="315" src="https://www.youtube.com/embed/2Kawrd5szHE" frameborder="0" allowfullscreen></iframe>""")
###Output
_____no_output_____
###Markdown
Imports To support both Python 2 and Python 3:
###Code
from __future__ import division, print_function, unicode_literals
###Output
_____no_output_____
###Markdown
To plot pretty figures:
###Code
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
We will need NumPy and TensorFlow:
###Code
import numpy as np
import tensorflow as tf
###Output
/Users/ageron/.virtualenvs/ml/lib/python3.6/importlib/_bootstrap.py:219: RuntimeWarning: compiletime version 3.5 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.6
return f(*args, **kwds)
###Markdown
Reproducibility Let's reset the default graph, in case you re-run this notebook without restarting the kernel:
###Code
tf.reset_default_graph()
###Output
_____no_output_____
###Markdown
Let's set the random seeds so that this notebook always produces the same output:
###Code
np.random.seed(42)
tf.set_random_seed(42)
###Output
_____no_output_____
###Markdown
Load MNIST Yes, I know, it's MNIST again. But hopefully this powerful idea will work as well on larger datasets, time will tell.
###Code
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/")
###Output
Extracting /tmp/data/train-images-idx3-ubyte.gz
Extracting /tmp/data/train-labels-idx1-ubyte.gz
Extracting /tmp/data/t10k-images-idx3-ubyte.gz
Extracting /tmp/data/t10k-labels-idx1-ubyte.gz
###Markdown
Let's look at what these hand-written digit images look like:
###Code
n_samples = 5
plt.figure(figsize=(n_samples * 2, 3))
for index in range(n_samples):
plt.subplot(1, n_samples, index + 1)
sample_image = mnist.train.images[index].reshape(28, 28)
plt.imshow(sample_image, cmap="binary")
plt.axis("off")
plt.show()
###Output
_____no_output_____
###Markdown
And these are the corresponding labels:
###Code
mnist.train.labels[:n_samples]
###Output
_____no_output_____
###Markdown
Now let's build a Capsule Network to classify these images. Here's the overall architecture, enjoy the ASCII art! ;-)Note: for readability, I left out two arrows: Labels → Mask, and Input Images → Reconstruction Loss. ``` Loss ↑ ┌─────────┴─────────┐ Labels → Margin Loss Reconstruction Loss ↑ ↑ Length Decoder ↑ ↑ Digit Capsules ────Mask────┘ ↖↑↗ ↖↑↗ ↖↑↗ Primary Capsules ↑ Input Images``` We are going to build the graph starting from the bottom layer, and gradually move up, left side first. Let's go! Input Images Let's start by creating a placeholder for the input images (28×28 pixels, 1 color channel = grayscale).
###Code
X = tf.placeholder(shape=[None, 28, 28, 1], dtype=tf.float32, name="X")
###Output
_____no_output_____
###Markdown
Primary Capsules The first layer will be composed of 32 maps of 6×6 capsules each, where each capsule will output an 8D activation vector:
###Code
caps1_n_maps = 32
caps1_n_caps = caps1_n_maps * 6 * 6 # 1152 primary capsules
caps1_n_dims = 8
###Output
_____no_output_____
###Markdown
To compute their outputs, we first apply two regular convolutional layers:
###Code
conv1_params = {
"filters": 256,
"kernel_size": 9,
"strides": 1,
"padding": "valid",
"activation": tf.nn.relu,
}
conv2_params = {
"filters": caps1_n_maps * caps1_n_dims, # 256 convolutional filters
"kernel_size": 9,
"strides": 2,
"padding": "valid",
"activation": tf.nn.relu
}
conv1 = tf.layers.conv2d(X, name="conv1", **conv1_params)
conv2 = tf.layers.conv2d(conv1, name="conv2", **conv2_params)
###Output
_____no_output_____
###Markdown
Note: since we used a kernel size of 9 and no padding (for some reason, that's what `"valid"` means), the image shrunk by 9-1=8 pixels after each convolutional layer (28×28 to 20×20, then 20×20 to 12×12), and since we used a stride of 2 in the second convolutional layer, the image size was divided by 2. This is how we end up with 6×6 feature maps. Next, we reshape the output to get a bunch of 8D vectors representing the outputs of the primary capsules. The output of `conv2` is an array containing 32×8=256 feature maps for each instance, where each feature map is 6×6. So the shape of this output is (_batch size_, 6, 6, 256). We want to chop the 256 into 32 vectors of 8 dimensions each. We could do this by reshaping to (_batch size_, 6, 6, 32, 8). However, since this first capsule layer will be fully connected to the next capsule layer, we can simply flatten the 6×6 grids. This means we just need to reshape to (_batch size_, 6×6×32, 8).
###Code
caps1_raw = tf.reshape(conv2, [-1, caps1_n_caps, caps1_n_dims],
name="caps1_raw")
###Output
_____no_output_____
###Markdown
Now we need to squash these vectors. Let's define the `squash()` function, based on equation (1) from the paper:$\operatorname{squash}(\mathbf{s}) = \dfrac{\|\mathbf{s}\|^2}{1 + \|\mathbf{s}\|^2} \dfrac{\mathbf{s}}{\|\mathbf{s}\|}$The `squash()` function will squash all vectors in the given array, along the given axis (by default, the last axis).**Caution**, a nasty bug is waiting to bite you: the derivative of $\|\mathbf{s}\|$ is undefined when $\|\mathbf{s}\|=0$, so we can't just use `tf.norm()`, or else it will blow up during training: if a vector is zero, the gradients will be `nan`, so when the optimizer updates the variables, they will also become `nan`, and from then on you will be stuck in `nan` land. The solution is to implement the norm manually by computing the square root of the sum of squares plus a tiny epsilon value: $\|\mathbf{s}\| \approx \sqrt{\sum\limits_i{{s_i}^2}\,\,+ \epsilon}$.
###Code
def squash(s, axis=-1, epsilon=1e-7, name=None):
with tf.name_scope(name, default_name="squash"):
squared_norm = tf.reduce_sum(tf.square(s), axis=axis,
keep_dims=True)
safe_norm = tf.sqrt(squared_norm + epsilon)
squash_factor = squared_norm / (1. + squared_norm)
unit_vector = s / safe_norm
return squash_factor * unit_vector
###Output
_____no_output_____
###Markdown
Now let's apply this function to get the output $\mathbf{u}_i$ of each primary capsules $i$ :
###Code
caps1_output = squash(caps1_raw, name="caps1_output")
###Output
_____no_output_____
###Markdown
Great! We have the output of the first capsule layer. It wasn't too hard, was it? However, computing the next layer is where the fun really begins. Digit Capsules To compute the output of the digit capsules, we must first compute the predicted output vectors (one for each primary / digit capsule pair). Then we can run the routing by agreement algorithm. Compute the Predicted Output Vectors The digit capsule layer contains 10 capsules (one for each digit) of 16 dimensions each:
###Code
caps2_n_caps = 10
caps2_n_dims = 16
###Output
_____no_output_____
###Markdown
For each capsule $i$ in the first layer, we want to predict the output of every capsule $j$ in the second layer. For this, we will need a transformation matrix $\mathbf{W}_{i,j}$ (one for each pair of capsules ($i$, $j$)), then we can compute the predicted output $\hat{\mathbf{u}}_{j|i} = \mathbf{W}_{i,j} \, \mathbf{u}_i$ (equation (2)-right in the paper). Since we want to transform an 8D vector into a 16D vector, each transformation matrix $\mathbf{W}_{i,j}$ must have a shape of (16, 8). To compute $\hat{\mathbf{u}}_{j|i}$ for every pair of capsules ($i$, $j$), we will use a nice feature of the `tf.matmul()` function: you probably know that it lets you multiply two matrices, but you may not know that it also lets you multiply higher dimensional arrays. It treats the arrays as arrays of matrices, and it performs itemwise matrix multiplication. For example, suppose you have two 4D arrays, each containing a 2×3 grid of matrices. The first contains matrices $\mathbf{A}, \mathbf{B}, \mathbf{C}, \mathbf{D}, \mathbf{E}, \mathbf{F}$ and the second contains matrices $\mathbf{G}, \mathbf{H}, \mathbf{I}, \mathbf{J}, \mathbf{K}, \mathbf{L}$. If you multiply these two 4D arrays using the `tf.matmul()` function, this is what you get:$\pmatrix{\mathbf{A} & \mathbf{B} & \mathbf{C} \\\mathbf{D} & \mathbf{E} & \mathbf{F}} \times\pmatrix{\mathbf{G} & \mathbf{H} & \mathbf{I} \\\mathbf{J} & \mathbf{K} & \mathbf{L}} = \pmatrix{\mathbf{AG} & \mathbf{BH} & \mathbf{CI} \\\mathbf{DJ} & \mathbf{EK} & \mathbf{FL}}$ We can apply this function to compute $\hat{\mathbf{u}}_{j|i}$ for every pair of capsules ($i$, $j$) like this (recall that there are 6×6×32=1152 capsules in the first layer, and 10 in the second layer):$\pmatrix{ \mathbf{W}_{1,1} & \mathbf{W}_{1,2} & \cdots & \mathbf{W}_{1,10} \\ \mathbf{W}_{2,1} & \mathbf{W}_{2,2} & \cdots & \mathbf{W}_{2,10} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{W}_{1152,1} & \mathbf{W}_{1152,2} & \cdots & \mathbf{W}_{1152,10}} \times\pmatrix{ \mathbf{u}_1 & \mathbf{u}_1 & \cdots & \mathbf{u}_1 \\ \mathbf{u}_2 & \mathbf{u}_2 & \cdots & \mathbf{u}_2 \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{u}_{1152} & \mathbf{u}_{1152} & \cdots & \mathbf{u}_{1152}}=\pmatrix{\hat{\mathbf{u}}_{1|1} & \hat{\mathbf{u}}_{2|1} & \cdots & \hat{\mathbf{u}}_{10|1} \\\hat{\mathbf{u}}_{1|2} & \hat{\mathbf{u}}_{2|2} & \cdots & \hat{\mathbf{u}}_{10|2} \\\vdots & \vdots & \ddots & \vdots \\\hat{\mathbf{u}}_{1|1152} & \hat{\mathbf{u}}_{2|1152} & \cdots & \hat{\mathbf{u}}_{10|1152}}$ The shape of the first array is (1152, 10, 16, 8), and the shape of the second array is (1152, 10, 8, 1). Note that the second array must contain 10 identical copies of the vectors $\mathbf{u}_1$ to $\mathbf{u}_{1152}$. To create this array, we will use the handy `tf.tile()` function, which lets you create an array containing many copies of a base array, tiled in any way you want. Oh, wait a second! We forgot one dimension: _batch size_. Say we feed 50 images to the capsule network, it will make predictions for these 50 images simultaneously. So the shape of the first array must be (50, 1152, 10, 16, 8), and the shape of the second array must be (50, 1152, 10, 8, 1). The first layer capsules actually already output predictions for all 50 images, so the second array will be fine, but for the first array, we will need to use `tf.tile()` to have 50 copies of the transformation matrices. Okay, let's start by creating a trainable variable of shape (1, 1152, 10, 16, 8) that will hold all the transformation matrices. 
The first dimension of size 1 will make this array easy to tile. We initialize this variable randomly using a normal distribution with a standard deviation to 0.1.
###Code
init_sigma = 0.1
W_init = tf.random_normal(
shape=(1, caps1_n_caps, caps2_n_caps, caps2_n_dims, caps1_n_dims),
stddev=init_sigma, dtype=tf.float32, name="W_init")
W = tf.Variable(W_init, name="W")
###Output
_____no_output_____
###Markdown
Now we can create the first array by repeating `W` once per instance:
###Code
batch_size = tf.shape(X)[0]
W_tiled = tf.tile(W, [batch_size, 1, 1, 1, 1], name="W_tiled")
###Output
_____no_output_____
###Markdown
That's it! On to the second array, now. As discussed earlier, we need to create an array of shape (_batch size_, 1152, 10, 8, 1), containing the output of the first layer capsules, repeated 10 times (once per digit, along the third dimension, which is axis=2). The `caps1_output` array has a shape of (_batch size_, 1152, 8), so we first need to expand it twice, to get an array of shape (_batch size_, 1152, 1, 8, 1), then we can repeat it 10 times along the third dimension:
###Code
caps1_output_expanded = tf.expand_dims(caps1_output, -1,
name="caps1_output_expanded")
caps1_output_tile = tf.expand_dims(caps1_output_expanded, 2,
name="caps1_output_tile")
caps1_output_tiled = tf.tile(caps1_output_tile, [1, 1, caps2_n_caps, 1, 1],
name="caps1_output_tiled")
###Output
_____no_output_____
###Markdown
Let's check the shape of the first array:
###Code
W_tiled
###Output
_____no_output_____
###Markdown
Good, and now the second:
###Code
caps1_output_tiled
###Output
_____no_output_____
###Markdown
Yes! Now, to get all the predicted output vectors $\hat{\mathbf{u}}_{j|i}$, we just need to multiply these two arrays using `tf.matmul()`, as explained earlier:
###Code
caps2_predicted = tf.matmul(W_tiled, caps1_output_tiled,
name="caps2_predicted")
###Output
_____no_output_____
###Markdown
Let's check the shape:
###Code
caps2_predicted
###Output
_____no_output_____
###Markdown
Perfect, for each instance in the batch (we don't know the batch size yet, hence the "?") and for each pair of first and second layer capsules (1152×10) we have a 16D predicted output column vector (16×1). We're ready to apply the routing by agreement algorithm! Routing by agreement First let's initialize the raw routing weights $b_{i,j}$ to zero:
###Code
raw_weights = tf.zeros([batch_size, caps1_n_caps, caps2_n_caps, 1, 1],
dtype=np.float32, name="raw_weights")
###Output
_____no_output_____
###Markdown
We will see why we need the last two dimensions of size 1 in a minute. Round 1 First, let's apply the softmax function to compute the routing weights, $\mathbf{c}_{i} = \operatorname{softmax}(\mathbf{b}_i)$ (equation (3) in the paper):
###Code
routing_weights = tf.nn.softmax(raw_weights, dim=2, name="routing_weights")
###Output
_____no_output_____
###Markdown
Now let's compute the weighted sum of all the predicted output vectors for each second-layer capsule, $\mathbf{s}_j = \sum\limits_{i}{c_{i,j}\hat{\mathbf{u}}_{j|i}}$ (equation (2)-left in the paper):
###Code
weighted_predictions = tf.multiply(routing_weights, caps2_predicted,
name="weighted_predictions")
weighted_sum = tf.reduce_sum(weighted_predictions, axis=1, keep_dims=True,
name="weighted_sum")
###Output
_____no_output_____
###Markdown
There are a couple important details to note here:* To perform elementwise matrix multiplication (also called the Hadamard product, noted $\circ$), we use the `tf.multiply()` function. It requires `routing_weights` and `caps2_predicted` to have the same rank, which is why we added two extra dimensions of size 1 to `routing_weights`, earlier.* The shape of `routing_weights` is (_batch size_, 1152, 10, 1, 1) while the shape of `caps2_predicted` is (_batch size_, 1152, 10, 16, 1). Since they don't match on the fourth dimension (1 _vs_ 16), `tf.multiply()` automatically _broadcasts_ the `routing_weights` 16 times along that dimension. If you are not familiar with broadcasting, a simple example might help: $ \pmatrix{1 & 2 & 3 \\ 4 & 5 & 6} \circ \pmatrix{10 & 100 & 1000} = \pmatrix{1 & 2 & 3 \\ 4 & 5 & 6} \circ \pmatrix{10 & 100 & 1000 \\ 10 & 100 & 1000} = \pmatrix{10 & 200 & 3000 \\ 40 & 500 & 6000} $ And finally, let's apply the squash function to get the outputs of the second layer capsules at the end of the first iteration of the routing by agreement algorithm, $\mathbf{v}_j = \operatorname{squash}(\mathbf{s}_j)$ :
###Code
caps2_output_round_1 = squash(weighted_sum, axis=-2,
name="caps2_output_round_1")
caps2_output_round_1
###Output
_____no_output_____
###Markdown
Good! We have ten 16D output vectors for each instance, as expected. Round 2 First, let's measure how close each predicted vector $\hat{\mathbf{u}}_{j|i}$ is to the actual output vector $\mathbf{v}_j$ by computing their scalar product $\hat{\mathbf{u}}_{j|i} \cdot \mathbf{v}_j$. * Quick math reminder: if $\vec{a}$ and $\vec{b}$ are two vectors of equal length, and $\mathbf{a}$ and $\mathbf{b}$ are their corresponding column vectors (i.e., matrices with a single column), then $\mathbf{a}^T \mathbf{b}$ (i.e., the matrix multiplication of the transpose of $\mathbf{a}$, and $\mathbf{b}$) is a 1×1 matrix containing the scalar product of the two vectors $\vec{a}\cdot\vec{b}$. In Machine Learning, we generally represent vectors as column vectors, so when we talk about computing the scalar product $\hat{\mathbf{u}}_{j|i} \cdot \mathbf{v}_j$, this actually means computing ${\hat{\mathbf{u}}_{j|i}}^T \mathbf{v}_j$. Since we need to compute the scalar product $\hat{\mathbf{u}}_{j|i} \cdot \mathbf{v}_j$ for each instance, and for each pair of first and second level capsules $(i, j)$, we will once again take advantage of the fact that `tf.matmul()` can multiply many matrices simultaneously. This will require playing around with `tf.tile()` to get all dimensions to match (except for the last 2), just like we did earlier. So let's look at the shape of `caps2_predicted`, which holds all the predicted output vectors $\hat{\mathbf{u}}_{j|i}$ for each instance and each pair of capsules:
###Code
caps2_predicted
###Output
_____no_output_____
###Markdown
And now let's look at the shape of `caps2_output_round_1`, which holds 10 outputs vectors of 16D each, for each instance:
###Code
caps2_output_round_1
###Output
_____no_output_____
###Markdown
To get these shapes to match, we just need to tile the `caps2_output_round_1` array 1152 times (once per primary capsule) along the second dimension:
###Code
caps2_output_round_1_tiled = tf.tile(
caps2_output_round_1, [1, caps1_n_caps, 1, 1, 1],
name="caps2_output_round_1_tiled")
###Output
_____no_output_____
###Markdown
And now we are ready to call `tf.matmul()` (note that we must tell it to transpose the matrices in the first array, to get ${\hat{\mathbf{u}}_{j|i}}^T$ instead of $\hat{\mathbf{u}}_{j|i}$):
###Code
agreement = tf.matmul(caps2_predicted, caps2_output_round_1_tiled,
transpose_a=True, name="agreement")
###Output
_____no_output_____
###Markdown
We can now update the raw routing weights $b_{i,j}$ by simply adding the scalar product $\hat{\mathbf{u}}_{j|i} \cdot \mathbf{v}_j$ we just computed: $b_{i,j} \gets b_{i,j} + \hat{\mathbf{u}}_{j|i} \cdot \mathbf{v}_j$ (see Procedure 1, step 7, in the paper).
###Code
raw_weights_round_2 = tf.add(raw_weights, agreement,
name="raw_weights_round_2")
###Output
_____no_output_____
###Markdown
The rest of round 2 is the same as in round 1:
###Code
routing_weights_round_2 = tf.nn.softmax(raw_weights_round_2,
dim=2,
name="routing_weights_round_2")
weighted_predictions_round_2 = tf.multiply(routing_weights_round_2,
caps2_predicted,
name="weighted_predictions_round_2")
weighted_sum_round_2 = tf.reduce_sum(weighted_predictions_round_2,
axis=1, keep_dims=True,
name="weighted_sum_round_2")
caps2_output_round_2 = squash(weighted_sum_round_2,
axis=-2,
name="caps2_output_round_2")
###Output
_____no_output_____
###Markdown
We could go on for a few more rounds, by repeating exactly the same steps as in round 2, but to keep things short, we will stop here:
###Code
caps2_output = caps2_output_round_2
###Output
_____no_output_____
###Markdown
Static or Dynamic Loop? In the code above, we created different operations in the TensorFlow graph for each round of the routing by agreement algorithm. In other words, it's a static loop.Sure, instead of copy/pasting the code several times, we could have written a `for` loop in Python, but this would not change the fact that the graph would end up containing different operations for each routing iteration. It's actually okay since we generally want less than 5 routing iterations, so the graph won't grow too big.However, you may prefer to implement the routing loop within the TensorFlow graph itself rather than using a Python `for` loop. To do this, you would need to use TensorFlow's `tf.while_loop()` function. This way, all routing iterations would reuse the same operations in the graph, it would be a dynamic loop.For example, here is how to build a small loop that computes the sum of squares from 1 to 100:
###Code
def condition(input, counter):
return tf.less(counter, 100)
def loop_body(input, counter):
output = tf.add(input, tf.square(counter))
return output, tf.add(counter, 1)
with tf.name_scope("compute_sum_of_squares"):
counter = tf.constant(1)
sum_of_squares = tf.constant(0)
result = tf.while_loop(condition, loop_body, [sum_of_squares, counter])
with tf.Session() as sess:
print(sess.run(result))
###Output
(328350, 100)
###Markdown
As you can see, the `tf.while_loop()` function expects the loop condition and body to be provided _via_ two functions. These functions will be called only once by TensorFlow, during the graph construction phase, _not_ while executing the graph. The `tf.while_loop()` function stitches together the graph fragments created by `condition()` and `loop_body()` with some additional operations to create the loop.Also note that during training, TensorFlow will automagically handle backpropogation through the loop, so you don't need to worry about that. Of course, we could have used this one-liner instead! ;-)
###Code
sum([i**2 for i in range(1, 100 + 1)])
###Output
_____no_output_____
###Markdown
Joke aside, apart from reducing the graph size, using a dynamic loop instead of a static loop can help reduce how much GPU RAM you use (if you are using a GPU). Indeed, if you set `swap_memory=True` when calling the `tf.while_loop()` function, TensorFlow will automatically check GPU RAM usage at each loop iteration, and it will take care of swapping memory between the GPU and the CPU when needed. Since CPU memory is much cheaper and abundant than GPU RAM, this can really make a big difference. Estimated Class Probabilities (Length) The lengths of the output vectors represent the class probabilities, so we could just use `tf.norm()` to compute them, but as we saw when discussing the squash function, it would be risky, so instead let's create our own `safe_norm()` function:
###Code
def safe_norm(s, axis=-1, epsilon=1e-7, keep_dims=False, name=None):
with tf.name_scope(name, default_name="safe_norm"):
squared_norm = tf.reduce_sum(tf.square(s), axis=axis,
keep_dims=keep_dims)
return tf.sqrt(squared_norm + epsilon)
y_proba = safe_norm(caps2_output, axis=-2, name="y_proba")
###Output
_____no_output_____
###Markdown
To predict the class of each instance, we can just select the one with the highest estimated probability. To do this, let's start by finding its index using `tf.argmax()`:
###Code
y_proba_argmax = tf.argmax(y_proba, axis=2, name="y_proba")
###Output
_____no_output_____
###Markdown
Let's look at the shape of `y_proba_argmax`:
###Code
y_proba_argmax
###Output
_____no_output_____
###Markdown
That's what we wanted: for each instance, we now have the index of the longest output vector. Let's get rid of the last two dimensions by using `tf.squeeze()` which removes dimensions of size 1. This gives us the capsule network's predicted class for each instance:
###Code
y_pred = tf.squeeze(y_proba_argmax, axis=[1,2], name="y_pred")
y_pred
###Output
_____no_output_____
###Markdown
Okay, we are now ready to define the training operations, starting with the losses. Labels First, we will need a placeholder for the labels:
###Code
y = tf.placeholder(shape=[None], dtype=tf.int64, name="y")
###Output
_____no_output_____
###Markdown
Margin loss The paper uses a special margin loss to make it possible to detect two or more different digits in each image:$ L_k = T_k \max(0, m^{+} - \|\mathbf{v}_k\|)^2 + \lambda (1 - T_k) \max(0, \|\mathbf{v}_k\| - m^{-})^2$* $T_k$ is equal to 1 if the digit of class $k$ is present, or 0 otherwise.* In the paper, $m^{+} = 0.9$, $m^{-} = 0.1$ and $\lambda = 0.5$.* Note that there was an error in the video (at 15:47): the max operations are squared, not the norms. Sorry about that.
###Code
m_plus = 0.9
m_minus = 0.1
lambda_ = 0.5
###Output
_____no_output_____
###Markdown
Since `y` will contain the digit classes, from 0 to 9, to get $T_k$ for every instance and every class, we can just use the `tf.one_hot()` function:
###Code
T = tf.one_hot(y, depth=caps2_n_caps, name="T")
###Output
_____no_output_____
###Markdown
A small example should make it clear what this does:
###Code
with tf.Session():
print(T.eval(feed_dict={y: np.array([0, 1, 2, 3, 9])}))
###Output
[[ 1. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 1. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 1. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 1.]]
###Markdown
Now let's compute the norm of the output vector for each output capsule and each instance. First, let's verify the shape of `caps2_output`:
###Code
caps2_output
###Output
_____no_output_____
###Markdown
The 16D output vectors are in the second to last dimension, so let's use the `safe_norm()` function with `axis=-2`:
###Code
caps2_output_norm = safe_norm(caps2_output, axis=-2, keep_dims=True,
name="caps2_output_norm")
###Output
_____no_output_____
###Markdown
Now let's compute $\max(0, m^{+} - \|\mathbf{v}_k\|)^2$, and reshape the result to get a simple matrix of shape (_batch size_, 10):
###Code
present_error_raw = tf.square(tf.maximum(0., m_plus - caps2_output_norm),
name="present_error_raw")
present_error = tf.reshape(present_error_raw, shape=(-1, 10),
name="present_error")
###Output
_____no_output_____
###Markdown
Next let's compute $\max(0, \|\mathbf{v}_k\| - m^{-})^2$ and reshape it:
###Code
absent_error_raw = tf.square(tf.maximum(0., caps2_output_norm - m_minus),
name="absent_error_raw")
absent_error = tf.reshape(absent_error_raw, shape=(-1, 10),
name="absent_error")
###Output
_____no_output_____
###Markdown
We are ready to compute the loss for each instance and each digit:
###Code
L = tf.add(T * present_error, lambda_ * (1.0 - T) * absent_error,
name="L")
###Output
_____no_output_____
###Markdown
Now we can sum the digit losses for each instance ($L_0 + L_1 + \cdots + L_9$), and compute the mean over all instances. This gives us the final margin loss:
###Code
margin_loss = tf.reduce_mean(tf.reduce_sum(L, axis=1), name="margin_loss")
###Output
_____no_output_____
###Markdown
Reconstruction Now let's add a decoder network on top of the capsule network. It is a regular 3-layer fully connected neural network which will learn to reconstruct the input images based on the output of the capsule network. This will force the capsule network to preserve all the information required to reconstruct the digits, across the whole network. This constraint regularizes the model: it reduces the risk of overfitting the training set, and it helps generalize to new digits. Mask The paper mentions that during training, instead of sending all the outputs of the capsule network to the decoder network, we must send only the output vector of the capsule that corresponds to the target digit. All the other output vectors must be masked out. At inference time, we must mask all output vectors except for the longest one, i.e., the one that corresponds to the predicted digit. You can see this in the paper's figure 2 (at 18:15 in the video): all output vectors are masked out, except for the reconstruction target's output vector. We need a placeholder to tell TensorFlow whether we want to mask the output vectors based on the labels (`True`) or on the predictions (`False`, the default):
###Code
mask_with_labels = tf.placeholder_with_default(False, shape=(),
name="mask_with_labels")
###Output
_____no_output_____
###Markdown
Now let's use `tf.cond()` to define the reconstruction targets as the labels `y` if `mask_with_labels` is `True`, or `y_pred` otherwise.
###Code
reconstruction_targets = tf.cond(mask_with_labels, # condition
lambda: y, # if True
lambda: y_pred, # if False
name="reconstruction_targets")
###Output
_____no_output_____
###Markdown
Note that the `tf.cond()` function expects the if-True and if-False tensors to be passed _via_ functions: these functions will be called just once during the graph construction phase (not during the execution phase), similar to `tf.while_loop()`. This allows TensorFlow to add the necessary operations to handle the conditional evaluation of the if-True or if-False tensors. However, in our case, the tensors `y` and `y_pred` are already created by the time we call `tf.cond()`, so unfortunately TensorFlow will consider both `y` and `y_pred` to be dependencies of the `reconstruction_targets` tensor. The `reconstruction_targets` tensor will end up with the correct value, but:1. whenever we evaluate a tensor that depends on `reconstruction_targets`, the `y_pred` tensor will be evaluated (even if `mask_with_layers` is `True`). This is not a big deal because computing `y_pred` adds no computing overhead during training, since we need it anyway to compute the margin loss. And during testing, if we are doing classification, we won't need reconstructions, so `reconstruction_targets` won't be evaluated at all.2. we will always need to feed a value for the `y` placeholder (even if `mask_with_layers` is `False`). This is a bit annoying, but we can pass an empty array, because TensorFlow won't use it anyway (it just does not know it yet when it checks for dependencies). Now that we have the reconstruction targets, let's create the reconstruction mask. It should be equal to 1.0 for the target class, and 0.0 for the other classes, for each instance. For this we can just use the `tf.one_hot()` function:
###Code
reconstruction_mask = tf.one_hot(reconstruction_targets,
depth=caps2_n_caps,
name="reconstruction_mask")
###Output
_____no_output_____
###Markdown
Let's check the shape of `reconstruction_mask`:
###Code
reconstruction_mask
###Output
_____no_output_____
###Markdown
Let's compare this to the shape of `caps2_output`:
###Code
caps2_output
###Output
_____no_output_____
###Markdown
Mmh, its shape is (_batch size_, 1, 10, 16, 1). We want to multiply it by the `reconstruction_mask`, but the shape of the `reconstruction_mask` is (_batch size_, 10). We must reshape it to (_batch size_, 1, 10, 1, 1) to make multiplication possible:
###Code
reconstruction_mask_reshaped = tf.reshape(
reconstruction_mask, [-1, 1, caps2_n_caps, 1, 1],
name="reconstruction_mask_reshaped")
###Output
_____no_output_____
###Markdown
At last! We can apply the mask:
###Code
caps2_output_masked = tf.multiply(
caps2_output, reconstruction_mask_reshaped,
name="caps2_output_masked")
caps2_output_masked
###Output
_____no_output_____
###Markdown
One last reshape operation to flatten the decoder's inputs:
###Code
decoder_input = tf.reshape(caps2_output_masked,
[-1, caps2_n_caps * caps2_n_dims],
name="decoder_input")
###Output
_____no_output_____
###Markdown
This gives us an array of shape (_batch size_, 160):
###Code
decoder_input
###Output
_____no_output_____
###Markdown
Decoder Now let's build the decoder. It's quite simple: two dense (fully connected) ReLU layers followed by a dense output sigmoid layer:
###Code
n_hidden1 = 512
n_hidden2 = 1024
n_output = 28 * 28
with tf.name_scope("decoder"):
hidden1 = tf.layers.dense(decoder_input, n_hidden1,
activation=tf.nn.relu,
name="hidden1")
hidden2 = tf.layers.dense(hidden1, n_hidden2,
activation=tf.nn.relu,
name="hidden2")
decoder_output = tf.layers.dense(hidden2, n_output,
activation=tf.nn.sigmoid,
name="decoder_output")
###Output
_____no_output_____
###Markdown
Reconstruction Loss Now let's compute the reconstruction loss. It is just the squared difference between the input image and the reconstructed image:
###Code
X_flat = tf.reshape(X, [-1, n_output], name="X_flat")
squared_difference = tf.square(X_flat - decoder_output,
name="squared_difference")
reconstruction_loss = tf.reduce_mean(squared_difference,
name="reconstruction_loss")
###Output
_____no_output_____
###Markdown
Final Loss The final loss is the sum of the margin loss and the reconstruction loss (scaled down by a factor of 0.0005 to ensure the margin loss dominates training):
###Code
alpha = 0.0005
loss = tf.add(margin_loss, alpha * reconstruction_loss, name="loss")
###Output
_____no_output_____
###Markdown
Final Touches Accuracy To measure our model's accuracy, we need to count the number of instances that are properly classified. For this, we can simply compare `y` and `y_pred`, convert the boolean value to a float32 (0.0 for False, 1.0 for True), and compute the mean over all the instances:
###Code
correct = tf.equal(y, y_pred, name="correct")
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32), name="accuracy")
###Output
_____no_output_____
###Markdown
Training Operations The paper mentions that the authors used the Adam optimizer with TensorFlow's default parameters:
###Code
optimizer = tf.train.AdamOptimizer()
training_op = optimizer.minimize(loss, name="training_op")
###Output
_____no_output_____
###Markdown
Init and Saver And let's add the usual variable initializer, as well as a `Saver`:
###Code
init = tf.global_variables_initializer()
saver = tf.train.Saver()
###Output
_____no_output_____
###Markdown
And... we're done with the construction phase! Please take a moment to celebrate. :) Training Training our capsule network is pretty standard. For simplicity, we won't do any fancy hyperparameter tuning, dropout or anything, we will just run the training operation over and over again, displaying the loss, and at the end of each epoch, measure the accuracy on the validation set, display it, and save the model if the validation loss is the lowest seen so far (this is a basic way to implement early stopping, without actually stopping). Hopefully the code should be self-explanatory, but here are a few details to note:* if a checkpoint file exists, it will be restored (this makes it possible to interrupt training, then restart it later from the last checkpoint),* we must not forget to feed `mask_with_labels=True` during training,* during testing, we let `mask_with_labels` default to `False` (but we still feed the labels since they are required to compute the accuracy),* the images loaded _via_ `mnist.train.next_batch()` are represented as `float32` arrays of shape \[784\], but the input placeholder `X` expects a `float32` array of shape \[28, 28, 1\], so we must reshape the images before we feed them to our model,* we evaluate the model's loss and accuracy on the full validation set (5,000 instances). To view progress and support systems that don't have a lot of RAM, the code evaluates the loss and accuracy on one batch at a time, and computes the mean loss and mean accuracy at the end.*Warning*: if you don't have a GPU, training will take a very long time (at least a few hours). With a GPU, it should take just a few minutes per epoch (e.g., 6 minutes on an NVidia GeForce GTX 1080Ti).
###Code
n_epochs = 10
batch_size = 50
restore_checkpoint = True
n_iterations_per_epoch = mnist.train.num_examples // batch_size
n_iterations_validation = mnist.validation.num_examples // batch_size
best_loss_val = np.infty
checkpoint_path = "./my_capsule_network"
with tf.Session() as sess:
if restore_checkpoint and tf.train.checkpoint_exists(checkpoint_path):
saver.restore(sess, checkpoint_path)
else:
init.run()
for epoch in range(n_epochs):
for iteration in range(1, n_iterations_per_epoch + 1):
X_batch, y_batch = mnist.train.next_batch(batch_size)
# Run the training operation and measure the loss:
_, loss_train = sess.run(
[training_op, loss],
feed_dict={X: X_batch.reshape([-1, 28, 28, 1]),
y: y_batch,
mask_with_labels: True})
print("\rIteration: {}/{} ({:.1f}%) Loss: {:.5f}".format(
iteration, n_iterations_per_epoch,
iteration * 100 / n_iterations_per_epoch,
loss_train),
end="")
# At the end of each epoch,
# measure the validation loss and accuracy:
loss_vals = []
acc_vals = []
for iteration in range(1, n_iterations_validation + 1):
X_batch, y_batch = mnist.validation.next_batch(batch_size)
loss_val, acc_val = sess.run(
[loss, accuracy],
feed_dict={X: X_batch.reshape([-1, 28, 28, 1]),
y: y_batch})
loss_vals.append(loss_val)
acc_vals.append(acc_val)
print("\rEvaluating the model: {}/{} ({:.1f}%)".format(
iteration, n_iterations_validation,
iteration * 100 / n_iterations_validation),
end=" " * 10)
loss_val = np.mean(loss_vals)
acc_val = np.mean(acc_vals)
print("\rEpoch: {} Val accuracy: {:.4f}% Loss: {:.6f}{}".format(
epoch + 1, acc_val * 100, loss_val,
" (improved)" if loss_val < best_loss_val else ""))
# And save the model if it improved:
if loss_val < best_loss_val:
save_path = saver.save(sess, checkpoint_path)
best_loss_val = loss_val
###Output
INFO:tensorflow:Restoring parameters from ./my_capsule_network
Epoch: 1 Val accuracy: 99.4400% Loss: 0.007998 (improved)
Epoch: 2 Val accuracy: 99.3400% Loss: 0.007959 (improved)
Epoch: 3 Val accuracy: 99.4000% Loss: 0.007436 (improved)
Epoch: 4 Val accuracy: 99.4000% Loss: 0.007568
Epoch: 5 Val accuracy: 99.2600% Loss: 0.007464
Epoch: 6 Val accuracy: 99.4800% Loss: 0.006631 (improved)
Epoch: 7 Val accuracy: 99.4000% Loss: 0.006915
Epoch: 8 Val accuracy: 99.4200% Loss: 0.006735
Epoch: 9 Val accuracy: 99.2200% Loss: 0.007709
Epoch: 10 Val accuracy: 99.4000% Loss: 0.007083
###Markdown
Training is finished, we reached over 99.4% accuracy on the validation set (this run resumed from an earlier checkpoint), things are looking good. Now let's evaluate the model on the test set. Evaluation
###Code
n_iterations_test = mnist.test.num_examples // batch_size
with tf.Session() as sess:
saver.restore(sess, checkpoint_path)
loss_tests = []
acc_tests = []
for iteration in range(1, n_iterations_test + 1):
X_batch, y_batch = mnist.test.next_batch(batch_size)
loss_test, acc_test = sess.run(
[loss, accuracy],
feed_dict={X: X_batch.reshape([-1, 28, 28, 1]),
y: y_batch})
loss_tests.append(loss_test)
acc_tests.append(acc_test)
print("\rEvaluating the model: {}/{} ({:.1f}%)".format(
iteration, n_iterations_test,
iteration * 100 / n_iterations_test),
end=" " * 10)
loss_test = np.mean(loss_tests)
acc_test = np.mean(acc_tests)
print("\rFinal test accuracy: {:.4f}% Loss: {:.6f}".format(
acc_test * 100, loss_test))
###Output
INFO:tensorflow:Restoring parameters from ./my_capsule_network
Final test accuracy: 99.5300% Loss: 0.006631
###Markdown
We reach 99.53% accuracy on the test set. Pretty nice. :) Predictions Now let's make some predictions! We first fix a few images from the test set, then we start a session, restore the trained model, evaluate `caps2_output` to get the capsule network's output vectors, `decoder_output` to get the reconstructions, and `y_pred` to get the class predictions:
###Code
n_samples = 5
sample_images = mnist.test.images[:n_samples].reshape([-1, 28, 28, 1])
with tf.Session() as sess:
saver.restore(sess, checkpoint_path)
caps2_output_value, decoder_output_value, y_pred_value = sess.run(
[caps2_output, decoder_output, y_pred],
feed_dict={X: sample_images,
y: np.array([], dtype=np.int64)})
###Output
INFO:tensorflow:Restoring parameters from ./my_capsule_network
###Markdown
Note: we feed `y` with an empty array, but TensorFlow will not use it, as explained earlier. And now let's plot the images and their labels, followed by the corresponding reconstructions and predictions:
###Code
sample_images = sample_images.reshape(-1, 28, 28)
reconstructions = decoder_output_value.reshape([-1, 28, 28])
plt.figure(figsize=(n_samples * 2, 3))
for index in range(n_samples):
plt.subplot(1, n_samples, index + 1)
plt.imshow(sample_images[index], cmap="binary")
plt.title("Label:" + str(mnist.test.labels[index]))
plt.axis("off")
plt.show()
plt.figure(figsize=(n_samples * 2, 3))
for index in range(n_samples):
plt.subplot(1, n_samples, index + 1)
plt.title("Predicted:" + str(y_pred_value[index]))
plt.imshow(reconstructions[index], cmap="binary")
plt.axis("off")
plt.show()
###Output
_____no_output_____
###Markdown
The predictions are all correct, and the reconstructions look great. Hurray! Interpreting the Output Vectors Let's tweak the output vectors to see what their pose parameters represent. First, let's check the shape of the `caps2_output_value` NumPy array:
###Code
caps2_output_value.shape
###Output
_____no_output_____
###Markdown
Let's create a function that will tweak each of the 16 pose parameters (dimensions) in all output vectors. Each tweaked output vector will be identical to the original output vector, except that one of its pose parameters will be incremented by a value varying from -0.5 to 0.5. By default there will be 11 steps (-0.5, -0.4, ..., +0.4, +0.5). This function will return an array of shape (_tweaked pose parameters_=16, _steps_=11, _batch size_=5, 1, 10, 16, 1):
###Code
def tweak_pose_parameters(output_vectors, min=-0.5, max=0.5, n_steps=11):
    steps = np.linspace(min, max, n_steps) # -0.5, -0.4, ..., +0.5
pose_parameters = np.arange(caps2_n_dims) # 0, 1, ..., 15
tweaks = np.zeros([caps2_n_dims, n_steps, 1, 1, 1, caps2_n_dims, 1])
tweaks[pose_parameters, :, 0, 0, 0, pose_parameters, 0] = steps
output_vectors_expanded = output_vectors[np.newaxis, np.newaxis]
return tweaks + output_vectors_expanded
###Output
_____no_output_____
###Markdown
Let's compute all the tweaked output vectors and reshape the result to (_parameters_×_steps_×_instances_, 1, 10, 16, 1) so we can feed the array to the decoder:
###Code
n_steps = 11
tweaked_vectors = tweak_pose_parameters(caps2_output_value, n_steps=n_steps)
tweaked_vectors_reshaped = tweaked_vectors.reshape(
[-1, 1, caps2_n_caps, caps2_n_dims, 1])
###Output
_____no_output_____
###Markdown
Now let's feed these tweaked output vectors to the decoder and get the reconstructions it produces:
###Code
tweak_labels = np.tile(mnist.test.labels[:n_samples], caps2_n_dims * n_steps)
with tf.Session() as sess:
saver.restore(sess, checkpoint_path)
decoder_output_value = sess.run(
decoder_output,
feed_dict={caps2_output: tweaked_vectors_reshaped,
mask_with_labels: True,
y: tweak_labels})
###Output
INFO:tensorflow:Restoring parameters from ./my_capsule_network
###Markdown
Let's reshape the decoder's output so we can easily iterate on the output dimension, the tweak steps, and the instances:
###Code
tweak_reconstructions = decoder_output_value.reshape(
[caps2_n_dims, n_steps, n_samples, 28, 28])
###Output
_____no_output_____
###Markdown
Lastly, let's plot all the reconstructions, for the first 3 output dimensions, for each tweaking step (column) and each digit (row):
###Code
for dim in range(3):
print("Tweaking output dimension #{}".format(dim))
plt.figure(figsize=(n_steps / 1.2, n_samples / 1.5))
for row in range(n_samples):
for col in range(n_steps):
plt.subplot(n_samples, n_steps, row * n_steps + col + 1)
plt.imshow(tweak_reconstructions[dim, col, row], cmap="binary")
plt.axis("off")
plt.show()
###Output
Tweaking output dimension #0
###Markdown
Capsule Networks (CapsNets) Based on the paper: [Dynamic Routing Between Capsules](https://arxiv.org/abs/1710.09829), by Sara Sabour, Nicholas Frosst and Geoffrey E. Hinton (NIPS 2017). Inspired in part by Huadong Liao's implementation: [CapsNet-TensorFlow](https://github.com/naturomics/CapsNet-Tensorflow). Introduction Watch [this video](https://youtu.be/pPN8d0E3900) to understand the key ideas behind Capsule Networks:
###Code
from IPython.display import HTML
HTML("""<iframe width="560" height="315" src="https://www.youtube.com/embed/pPN8d0E3900" frameborder="0" allowfullscreen></iframe>""")
###Output
_____no_output_____
###Markdown
You may also want to watch [this video](https://youtu.be/2Kawrd5szHE), which presents the main difficulties encountered when implementing this notebook:
###Code
HTML("""<iframe width="560" height="315" src="https://www.youtube.com/embed/2Kawrd5szHE" frameborder="0" allowfullscreen></iframe>""")
###Output
_____no_output_____
###Markdown
Imports To support both Python 2 and Python 3:
###Code
from __future__ import division, print_function, unicode_literals
###Output
_____no_output_____
###Markdown
To plot pretty figures:
###Code
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
We will need NumPy and TensorFlow:
###Code
import numpy as np
import tensorflow as tf
###Output
_____no_output_____
###Markdown
Reproducibility Let's reset the default graph, in case you re-run this notebook without restarting the kernel:
###Code
tf.reset_default_graph()
###Output
_____no_output_____
###Markdown
Let's set the random seeds so that this notebook always produces the same output:
###Code
np.random.seed(42)
tf.set_random_seed(42)
###Output
_____no_output_____
###Markdown
Load MNIST Yes, I know, it's MNIST again. But hopefully this powerful idea will work as well on larger datasets, time will tell.
###Code
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/")
###Output
Extracting /tmp/data/train-images-idx3-ubyte.gz
Extracting /tmp/data/train-labels-idx1-ubyte.gz
Extracting /tmp/data/t10k-images-idx3-ubyte.gz
Extracting /tmp/data/t10k-labels-idx1-ubyte.gz
###Markdown
Let's look at what these hand-written digit images look like:
###Code
n_samples = 5
plt.figure(figsize=(n_samples * 2, 3))
for index in range(n_samples):
plt.subplot(1, n_samples, index + 1)
sample_image = mnist.train.images[index].reshape(28, 28)
plt.imshow(sample_image, cmap="binary")
plt.axis("off")
plt.show()
###Output
_____no_output_____
###Markdown
And these are the corresponding labels:
###Code
mnist.train.labels[:n_samples]
###Output
_____no_output_____
###Markdown
Now let's build a Capsule Network to classify these images. Here's the overall architecture, enjoy the ASCII art! ;-)Note: for readability, I left out two arrows: Labels → Mask, and Input Images → Reconstruction Loss. ``` Loss ↑ ┌─────────┴─────────┐ Labels → Margin Loss Reconstruction Loss ↑ ↑ Length Decoder ↑ ↑ Digit Capsules ────Mask────┘ ↖↑↗ ↖↑↗ ↖↑↗ Primary Capsules ↑ Input Images``` We are going to build the graph starting from the bottom layer, and gradually move up, left side first. Let's go! Input Images Let's start by creating a placeholder for the input images (28×28 pixels, 1 color channel = grayscale).
###Code
X = tf.placeholder(shape=[None, 28, 28, 1], dtype=tf.float32, name="X")
###Output
_____no_output_____
###Markdown
Primary Capsules The first layer will be composed of 32 maps of 6×6 capsules each, where each capsule will output an 8D activation vector:
###Code
caps1_n_maps = 32
caps1_n_caps = caps1_n_maps * 6 * 6 # 1152 primary capsules
caps1_n_dims = 8
###Output
_____no_output_____
###Markdown
To compute their outputs, we first apply two regular convolutional layers:
###Code
conv1_params = {
"filters": 256,
"kernel_size": 9,
"strides": 1,
"padding": "valid",
"activation": tf.nn.relu,
}
conv2_params = {
"filters": caps1_n_maps * caps1_n_dims, # 256 convolutional filters
"kernel_size": 9,
"strides": 2,
"padding": "valid",
"activation": tf.nn.relu
}
conv1 = tf.layers.conv2d(X, name="conv1", **conv1_params)
conv2 = tf.layers.conv2d(conv1, name="conv2", **conv2_params)
###Output
_____no_output_____
###Markdown
Note: since we used a kernel size of 9 and no padding (for some reason, that's what `"valid"` means), the image shrank by 9-1=8 pixels after each convolutional layer (28×28 to 20×20, then 20×20 to 12×12), and since we used a stride of 2 in the second convolutional layer, the image size was divided by 2. This is how we end up with 6×6 feature maps. Next, we reshape the output to get a bunch of 8D vectors representing the outputs of the primary capsules. The output of `conv2` is an array containing 32×8=256 feature maps for each instance, where each feature map is 6×6. So the shape of this output is (_batch size_, 6, 6, 256). We want to chop the 256 into 32 vectors of 8 dimensions each. We could do this by reshaping to (_batch size_, 6, 6, 32, 8). However, since this first capsule layer will be fully connected to the next capsule layer, we can simply flatten the 6×6 grids. This means we just need to reshape to (_batch size_, 6×6×32, 8).
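As a quick sanity check, we can reproduce this shape arithmetic in plain Python (a throwaway helper, not part of the model):
###Code
def conv_out_size(in_size, kernel_size, stride):
    # "valid" padding: output = floor((in_size - kernel_size) / stride) + 1
    return (in_size - kernel_size) // stride + 1

size = conv_out_size(28, kernel_size=9, stride=1)    # conv1: 28 -> 20
size = conv_out_size(size, kernel_size=9, stride=2)  # conv2: 20 -> 6
print(size)  # 6, hence the 6x6 grid of primary capsules
###Output
_____no_output_____
###Markdown
Now let's do the reshape: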
###Code
caps1_raw = tf.reshape(conv2, [-1, caps1_n_caps, caps1_n_dims],
name="caps1_raw")
###Output
_____no_output_____
###Markdown
Now we need to squash these vectors. Let's define the `squash()` function, based on equation (1) from the paper:$\operatorname{squash}(\mathbf{s}) = \dfrac{\|\mathbf{s}\|^2}{1 + \|\mathbf{s}\|^2} \dfrac{\mathbf{s}}{\|\mathbf{s}\|}$The `squash()` function will squash all vectors in the given array, along the given axis (by default, the last axis).**Caution**, a nasty bug is waiting to bite you: the derivative of $\|\mathbf{s}\|$ is undefined when $\|\mathbf{s}\|=0$, so we can't just use `tf.norm()`, or else it will blow up during training: if a vector is zero, the gradients will be `nan`, so when the optimizer updates the variables, they will also become `nan`, and from then on you will be stuck in `nan` land. The solution is to implement the norm manually by computing the square root of the sum of squares plus a tiny epsilon value: $\|\mathbf{s}\| \approx \sqrt{\sum\limits_i{{s_i}^2}\,\,+ \epsilon}$.
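To see the problem concretely, here is a minimal sketch of the pitfall (a throwaway cell, not part of the model):
###Code
# The gradient of tf.norm() at the zero vector is nan, which would poison training:
zero_vector = tf.zeros([2], name="zero_vector")
norm = tf.norm(zero_vector)
norm_gradient = tf.gradients(norm, zero_vector)

with tf.Session() as sess:
    print(sess.run(norm_gradient))  # should print [array([nan, nan], dtype=float32)]
###Output
_____no_output_____
###Markdown
With that caveat in mind, here is the `squash()` function: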
###Code
def squash(s, axis=-1, epsilon=1e-7, name=None):
with tf.name_scope(name, default_name="squash"):
squared_norm = tf.reduce_sum(tf.square(s), axis=axis,
keep_dims=True)
safe_norm = tf.sqrt(squared_norm + epsilon)
squash_factor = squared_norm / (1. + squared_norm)
unit_vector = s / safe_norm
return squash_factor * unit_vector
###Output
_____no_output_____
###Markdown
Now let's apply this function to get the output $\mathbf{u}_i$ of each primary capsule $i$:
###Code
caps1_output = squash(caps1_raw, name="caps1_output")
###Output
_____no_output_____
###Markdown
Great! We have the output of the first capsule layer. It wasn't too hard, was it? However, computing the next layer is where the fun really begins. Digit Capsules To compute the output of the digit capsules, we must first compute the predicted output vectors (one for each primary / digit capsule pair). Then we can run the routing by agreement algorithm. Compute the Predicted Output Vectors The digit capsule layer contains 10 capsules (one for each digit) of 16 dimensions each:
###Code
caps2_n_caps = 10
caps2_n_dims = 16
###Output
_____no_output_____
###Markdown
For each capsule $i$ in the first layer, we want to predict the output of every capsule $j$ in the second layer. For this, we will need a transformation matrix $\mathbf{W}_{i,j}$ (one for each pair of capsules ($i$, $j$)), then we can compute the predicted output $\hat{\mathbf{u}}_{j|i} = \mathbf{W}_{i,j} \, \mathbf{u}_i$ (equation (2)-right in the paper). Since we want to transform an 8D vector into a 16D vector, each transformation matrix $\mathbf{W}_{i,j}$ must have a shape of (16, 8). To compute $\hat{\mathbf{u}}_{j|i}$ for every pair of capsules ($i$, $j$), we will use a nice feature of the `tf.matmul()` function: you probably know that it lets you multiply two matrices, but you may not know that it also lets you multiply higher dimensional arrays. It treats the arrays as arrays of matrices, and it performs itemwise matrix multiplication. For example, suppose you have two 4D arrays, each containing a 2×3 grid of matrices. The first contains matrices $\mathbf{A}, \mathbf{B}, \mathbf{C}, \mathbf{D}, \mathbf{E}, \mathbf{F}$ and the second contains matrices $\mathbf{G}, \mathbf{H}, \mathbf{I}, \mathbf{J}, \mathbf{K}, \mathbf{L}$. If you multiply these two 4D arrays using the `tf.matmul()` function, this is what you get:$\pmatrix{\mathbf{A} & \mathbf{B} & \mathbf{C} \\\mathbf{D} & \mathbf{E} & \mathbf{F}} \times\pmatrix{\mathbf{G} & \mathbf{H} & \mathbf{I} \\\mathbf{J} & \mathbf{K} & \mathbf{L}} = \pmatrix{\mathbf{AG} & \mathbf{BH} & \mathbf{CI} \\\mathbf{DJ} & \mathbf{EK} & \mathbf{FL}}$ We can apply this function to compute $\hat{\mathbf{u}}_{j|i}$ for every pair of capsules ($i$, $j$) like this (recall that there are 6×6×32=1152 capsules in the first layer, and 10 in the second layer):$\pmatrix{ \mathbf{W}_{1,1} & \mathbf{W}_{1,2} & \cdots & \mathbf{W}_{1,10} \\ \mathbf{W}_{2,1} & \mathbf{W}_{2,2} & \cdots & \mathbf{W}_{2,10} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{W}_{1152,1} & \mathbf{W}_{1152,2} & \cdots & \mathbf{W}_{1152,10}} \times\pmatrix{ \mathbf{u}_1 & \mathbf{u}_1 & \cdots & \mathbf{u}_1 \\ \mathbf{u}_2 & \mathbf{u}_2 & \cdots & \mathbf{u}_2 \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{u}_{1152} & \mathbf{u}_{1152} & \cdots & \mathbf{u}_{1152}}=\pmatrix{\hat{\mathbf{u}}_{1|1} & \hat{\mathbf{u}}_{2|1} & \cdots & \hat{\mathbf{u}}_{10|1} \\\hat{\mathbf{u}}_{1|2} & \hat{\mathbf{u}}_{2|2} & \cdots & \hat{\mathbf{u}}_{10|2} \\\vdots & \vdots & \ddots & \vdots \\\hat{\mathbf{u}}_{1|1152} & \hat{\mathbf{u}}_{2|1152} & \cdots & \hat{\mathbf{u}}_{10|1152}}$ The shape of the first array is (1152, 10, 16, 8), and the shape of the second array is (1152, 10, 8, 1). Note that the second array must contain 10 identical copies of the vectors $\mathbf{u}_1$ to $\mathbf{u}_{1152}$. To create this array, we will use the handy `tf.tile()` function, which lets you create an array containing many copies of a base array, tiled in any way you want. Oh, wait a second! We forgot one dimension: _batch size_. Say we feed 50 images to the capsule network, it will make predictions for these 50 images simultaneously. So the shape of the first array must be (50, 1152, 10, 16, 8), and the shape of the second array must be (50, 1152, 10, 8, 1). The first layer capsules actually already output predictions for all 50 images, so the second array will be fine, but for the first array, we will need to use `tf.tile()` to have 50 copies of the transformation matrices. Okay, let's start by creating a trainable variable of shape (1, 1152, 10, 16, 8) that will hold all the transformation matrices. 
The first dimension of size 1 will make this array easy to tile. We initialize this variable randomly using a normal distribution with a standard deviation of 0.01.
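Before building the real thing, here is a quick toy check of the itemwise matrix multiplication described above (hypothetical shapes; NumPy's `matmul` also treats the leading dimensions as a stack of matrices):
###Code
A = np.random.rand(2, 3, 2, 2)  # a 2x3 grid of 2x2 matrices
B = np.random.rand(2, 3, 2, 2)  # another 2x3 grid of 2x2 matrices
C = np.matmul(A, B)             # one 2x2 matrix product per grid cell
print(C.shape)                  # (2, 3, 2, 2)
###Output
_____no_output_____
###Markdown
Now let's create the transformation matrices: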
###Code
init_sigma = 0.01
W_init = tf.random_normal(
shape=(1, caps1_n_caps, caps2_n_caps, caps2_n_dims, caps1_n_dims),
stddev=init_sigma, dtype=tf.float32, name="W_init")
W = tf.Variable(W_init, name="W")
###Output
_____no_output_____
###Markdown
Now we can create the first array by repeating `W` once per instance:
###Code
batch_size = tf.shape(X)[0]
W_tiled = tf.tile(W, [batch_size, 1, 1, 1, 1], name="W_tiled")
###Output
_____no_output_____
###Markdown
That's it! On to the second array, now. As discussed earlier, we need to create an array of shape (_batch size_, 1152, 10, 8, 1), containing the output of the first layer capsules, repeated 10 times (once per digit, along the third dimension, which is axis=2). The `caps1_output` array has a shape of (_batch size_, 1152, 8), so we first need to expand it twice, to get an array of shape (_batch size_, 1152, 1, 8, 1), then we can repeat it 10 times along the third dimension:
###Code
caps1_output_expanded = tf.expand_dims(caps1_output, -1,
name="caps1_output_expanded")
caps1_output_tile = tf.expand_dims(caps1_output_expanded, 2,
name="caps1_output_tile")
caps1_output_tiled = tf.tile(caps1_output_tile, [1, 1, caps2_n_caps, 1, 1],
name="caps1_output_tiled")
###Output
_____no_output_____
###Markdown
Let's check the shape of the first array:
###Code
W_tiled
###Output
_____no_output_____
###Markdown
Good, and now the second:
###Code
caps1_output_tiled
###Output
_____no_output_____
###Markdown
Yes! Now, to get all the predicted output vectors $\hat{\mathbf{u}}_{j|i}$, we just need to multiply these two arrays using `tf.matmul()`, as explained earlier:
###Code
caps2_predicted = tf.matmul(W_tiled, caps1_output_tiled,
name="caps2_predicted")
###Output
_____no_output_____
###Markdown
Let's check the shape:
###Code
caps2_predicted
###Output
_____no_output_____
###Markdown
Perfect, for each instance in the batch (we don't know the batch size yet, hence the "?") and for each pair of first and second layer capsules (1152×10) we have a 16D predicted output column vector (16×1). We're ready to apply the routing by agreement algorithm! Routing by agreement First let's initialize the raw routing weights $b_{i,j}$ to zero:
###Code
raw_weights = tf.zeros([batch_size, caps1_n_caps, caps2_n_caps, 1, 1],
dtype=np.float32, name="raw_weights")
###Output
_____no_output_____
###Markdown
We will see why we need the last two dimensions of size 1 in a minute. Round 1 First, let's apply the softmax function to compute the routing weights, $\mathbf{c}_{i} = \operatorname{softmax}(\mathbf{b}_i)$ (equation (3) in the paper):
###Code
routing_weights = tf.nn.softmax(raw_weights, dim=2, name="routing_weights")
###Output
_____no_output_____
###Markdown
Now let's compute the weighted sum of all the predicted output vectors for each second-layer capsule, $\mathbf{s}_j = \sum\limits_{i}{c_{i,j}\hat{\mathbf{u}}_{j|i}}$ (equation (2)-left in the paper):
###Code
weighted_predictions = tf.multiply(routing_weights, caps2_predicted,
name="weighted_predictions")
weighted_sum = tf.reduce_sum(weighted_predictions, axis=1, keep_dims=True,
name="weighted_sum")
###Output
_____no_output_____
###Markdown
There are a couple important details to note here:* To perform elementwise matrix multiplication (also called the Hadamard product, noted $\circ$), we use the `tf.multiply()` function. It requires `routing_weights` and `caps2_predicted` to have the same rank, which is why we added two extra dimensions of size 1 to `routing_weights`, earlier.* The shape of `routing_weights` is (_batch size_, 1152, 10, 1, 1) while the shape of `caps2_predicted` is (_batch size_, 1152, 10, 16, 1). Since they don't match on the fourth dimension (1 _vs_ 16), `tf.multiply()` automatically _broadcasts_ the `routing_weights` 16 times along that dimension. If you are not familiar with broadcasting, a simple example might help: $ \pmatrix{1 & 2 & 3 \\ 4 & 5 & 6} \circ \pmatrix{10 & 100 & 1000} = \pmatrix{1 & 2 & 3 \\ 4 & 5 & 6} \circ \pmatrix{10 & 100 & 1000 \\ 10 & 100 & 1000} = \pmatrix{10 & 200 & 3000 \\ 40 & 500 & 6000} $ And finally, let's apply the squash function to get the outputs of the second layer capsules at the end of the first iteration of the routing by agreement algorithm, $\mathbf{v}_j = \operatorname{squash}(\mathbf{s}_j)$ :
###Code
caps2_output_round_1 = squash(weighted_sum, axis=-2,
name="caps2_output_round_1")
caps2_output_round_1
###Output
_____no_output_____
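###Markdown
If broadcasting is new to you, here is the numeric example above, checked in NumPy:
###Code
a = np.array([[1., 2., 3.],
              [4., 5., 6.]])        # shape (2, 3)
b = np.array([[10., 100., 1000.]])  # shape (1, 3): broadcast along the first axis
print(a * b)                        # [[10. 200. 3000.] [40. 500. 6000.]]
###Output
_____no_output_____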
###Markdown
Good! We have ten 16D output vectors for each instance, as expected. Round 2 First, let's measure how close each predicted vector $\hat{\mathbf{u}}_{j|i}$ is to the actual output vector $\mathbf{v}_j$ by computing their scalar product $\hat{\mathbf{u}}_{j|i} \cdot \mathbf{v}_j$. * Quick math reminder: if $\vec{a}$ and $\vec{b}$ are two vectors of equal length, and $\mathbf{a}$ and $\mathbf{b}$ are their corresponding column vectors (i.e., matrices with a single column), then $\mathbf{a}^T \mathbf{b}$ (i.e., the matrix multiplication of the transpose of $\mathbf{a}$, and $\mathbf{b}$) is a 1×1 matrix containing the scalar product of the two vectors $\vec{a}\cdot\vec{b}$. In Machine Learning, we generally represent vectors as column vectors, so when we talk about computing the scalar product $\hat{\mathbf{u}}_{j|i} \cdot \mathbf{v}_j$, this actually means computing ${\hat{\mathbf{u}}_{j|i}}^T \mathbf{v}_j$. Since we need to compute the scalar product $\hat{\mathbf{u}}_{j|i} \cdot \mathbf{v}_j$ for each instance, and for each pair of first and second level capsules $(i, j)$, we will once again take advantage of the fact that `tf.matmul()` can multiply many matrices simultaneously. This will require playing around with `tf.tile()` to get all dimensions to match (except for the last 2), just like we did earlier. So let's look at the shape of `caps2_predicted`, which holds all the predicted output vectors $\hat{\mathbf{u}}_{j|i}$ for each instance and each pair of capsules:
###Code
caps2_predicted
###Output
_____no_output_____
###Markdown
And now let's look at the shape of `caps2_output_round_1`, which holds 10 output vectors of 16D each, for each instance:
###Code
caps2_output_round_1
###Output
_____no_output_____
###Markdown
To get these shapes to match, we just need to tile the `caps2_output_round_1` array 1152 times (once per primary capsule) along the second dimension:
###Code
caps2_output_round_1_tiled = tf.tile(
caps2_output_round_1, [1, caps1_n_caps, 1, 1, 1],
name="caps2_output_round_1_tiled")
###Output
_____no_output_____
###Markdown
And now we are ready to call `tf.matmul()` (note that we must tell it to transpose the matrices in the first array, to get ${\hat{\mathbf{u}}_{j|i}}^T$ instead of $\hat{\mathbf{u}}_{j|i}$):
###Code
agreement = tf.matmul(caps2_predicted, caps2_output_round_1_tiled,
transpose_a=True, name="agreement")
###Output
_____no_output_____
###Markdown
We can now update the raw routing weights $b_{i,j}$ by simply adding the scalar product $\hat{\mathbf{u}}_{j|i} \cdot \mathbf{v}_j$ we just computed: $b_{i,j} \gets b_{i,j} + \hat{\mathbf{u}}_{j|i} \cdot \mathbf{v}_j$ (see Procedure 1, step 7, in the paper).
###Code
raw_weights_round_2 = tf.add(raw_weights, agreement,
name="raw_weights_round_2")
###Output
_____no_output_____
###Markdown
The rest of round 2 is the same as in round 1:
###Code
routing_weights_round_2 = tf.nn.softmax(raw_weights_round_2,
dim=2,
name="routing_weights_round_2")
weighted_predictions_round_2 = tf.multiply(routing_weights_round_2,
caps2_predicted,
name="weighted_predictions_round_2")
weighted_sum_round_2 = tf.reduce_sum(weighted_predictions_round_2,
axis=1, keep_dims=True,
name="weighted_sum_round_2")
caps2_output_round_2 = squash(weighted_sum_round_2,
axis=-2,
name="caps2_output_round_2")
###Output
_____no_output_____
###Markdown
We could go on for a few more rounds, by repeating exactly the same steps as in round 2, but to keep things short, we will stop here:
###Code
caps2_output = caps2_output_round_2
###Output
_____no_output_____
###Markdown
Static or Dynamic Loop? In the code above, we created different operations in the TensorFlow graph for each round of the routing by agreement algorithm. In other words, it's a static loop.Sure, instead of copy/pasting the code several times, we could have written a `for` loop in Python, but this would not change the fact that the graph would end up containing different operations for each routing iteration. It's actually okay since we generally want fewer than 5 routing iterations, so the graph won't grow too big.However, you may prefer to implement the routing loop within the TensorFlow graph itself rather than using a Python `for` loop. To do this, you would need to use TensorFlow's `tf.while_loop()` function. This way, all routing iterations would reuse the same operations in the graph, it would be a dynamic loop.For example, here is how to build a small loop that computes the sum of squares from 1 to 99 (note the strict less-than condition, so the counter stops before reaching 100):
###Code
def condition(input, counter):
return tf.less(counter, 100)
def loop_body(input, counter):
output = tf.add(input, tf.square(counter))
return output, tf.add(counter, 1)
with tf.name_scope("compute_sum_of_squares"):
counter = tf.constant(1)
sum_of_squares = tf.constant(0)
result = tf.while_loop(condition, loop_body, [sum_of_squares, counter])
with tf.Session() as sess:
print(sess.run(result))
###Output
(328350, 100)
###Markdown
As you can see, the `tf.while_loop()` function expects the loop condition and body to be provided _via_ two functions. These functions will be called only once by TensorFlow, during the graph construction phase, _not_ while executing the graph. The `tf.while_loop()` function stitches together the graph fragments created by `condition()` and `loop_body()` with some additional operations to create the loop.Also note that during training, TensorFlow will automagically handle backpropagation through the loop, so you don't need to worry about that. Of course, we could have used this one-liner instead! ;-)
###Code
sum([i**2 for i in range(1, 100)])  # 328350, same as the loop above
###Output
_____no_output_____
###Markdown
Joke aside, apart from reducing the graph size, using a dynamic loop instead of a static loop can help reduce how much GPU RAM you use (if you are using a GPU). Indeed, if you set `swap_memory=True` when calling the `tf.while_loop()` function, TensorFlow will automatically check GPU RAM usage at each loop iteration, and it will take care of swapping memory between the GPU and the CPU when needed. Since CPU memory is much cheaper and more abundant than GPU RAM, this can really make a big difference. Estimated Class Probabilities (Length) The lengths of the output vectors represent the class probabilities, so we could just use `tf.norm()` to compute them, but as we saw when discussing the squash function, it would be risky, so instead let's create our own `safe_norm()` function:
###Code
def safe_norm(s, axis=-1, epsilon=1e-7, keep_dims=False, name=None):
with tf.name_scope(name, default_name="safe_norm"):
squared_norm = tf.reduce_sum(tf.square(s), axis=axis,
keep_dims=keep_dims)
return tf.sqrt(squared_norm + epsilon)
y_proba = safe_norm(caps2_output, axis=-2, name="y_proba")
###Output
_____no_output_____
###Markdown
To predict the class of each instance, we can just select the one with the highest estimated probability. To do this, let's start by finding its index using `tf.argmax()`:
###Code
y_proba_argmax = tf.argmax(y_proba, axis=2, name="y_proba_argmax")
###Output
_____no_output_____
###Markdown
Let's look at the shape of `y_proba_argmax`:
###Code
y_proba_argmax
###Output
_____no_output_____
###Markdown
That's what we wanted: for each instance, we now have the index of the longest output vector. Let's get rid of the last two dimensions by using `tf.squeeze()` which removes dimensions of size 1. This gives us the capsule network's predicted class for each instance:
###Code
y_pred = tf.squeeze(y_proba_argmax, axis=[1,2], name="y_pred")
y_pred
###Output
_____no_output_____
###Markdown
Okay, we are now ready to define the training operations, starting with the losses. Labels First, we will need a placeholder for the labels:
###Code
y = tf.placeholder(shape=[None], dtype=tf.int64, name="y")
###Output
_____no_output_____
###Markdown
Margin loss The paper uses a special margin loss to make it possible to detect two or more different digits in each image:$ L_k = T_k \max(0, m^{+} - \|\mathbf{v}_k\|)^2 + \lambda (1 - T_k) \max(0, \|\mathbf{v}_k\| - m^{-})^2$* $T_k$ is equal to 1 if the digit of class $k$ is present, or 0 otherwise.* In the paper, $m^{+} = 0.9$, $m^{-} = 0.1$ and $\lambda = 0.5$.* Note that there was an error in the video (at 15:47): the max operations are squared, not the norms. Sorry about that.
###Code
m_plus = 0.9
m_minus = 0.1
lambda_ = 0.5
###Output
_____no_output_____
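###Markdown
For intuition, here is a quick numeric check of the formula in plain Python (`margin_loss_term` is a throwaway helper with hypothetical vector lengths):
###Code
def margin_loss_term(T_k, v_norm, m_plus=0.9, m_minus=0.1, lambda_=0.5):
    # The margin loss for a single class k, as defined above
    return (T_k * max(0., m_plus - v_norm) ** 2
            + lambda_ * (1 - T_k) * max(0., v_norm - m_minus) ** 2)

print(margin_loss_term(T_k=1, v_norm=0.95))  # present, confident -> 0.0
print(margin_loss_term(T_k=1, v_norm=0.3))   # present, not confident -> 0.36
print(margin_loss_term(T_k=0, v_norm=0.8))   # absent, but long vector -> 0.245
###Output
_____no_output_____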
###Markdown
Since `y` will contain the digit classes, from 0 to 9, to get $T_k$ for every instance and every class, we can just use the `tf.one_hot()` function:
###Code
T = tf.one_hot(y, depth=caps2_n_caps, name="T")
###Output
_____no_output_____
###Markdown
A small example should make it clear what this does:
###Code
with tf.Session():
print(T.eval(feed_dict={y: np.array([0, 1, 2, 3, 9])}))
###Output
[[ 1. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 1. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 1. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 1.]]
###Markdown
Now let's compute the norm of the output vector for each output capsule and each instance. First, let's verify the shape of `caps2_output`:
###Code
caps2_output
###Output
_____no_output_____
###Markdown
The 16D output vectors are in the second to last dimension, so let's use the `safe_norm()` function with `axis=-2`:
###Code
caps2_output_norm = safe_norm(caps2_output, axis=-2, keep_dims=True,
name="caps2_output_norm")
###Output
_____no_output_____
###Markdown
Now let's compute $\max(0, m^{+} - \|\mathbf{v}_k\|)^2$, and reshape the result to get a simple matrix of shape (_batch size_, 10):
###Code
present_error_raw = tf.square(tf.maximum(0., m_plus - caps2_output_norm),
name="present_error_raw")
present_error = tf.reshape(present_error_raw, shape=(-1, 10),
name="present_error")
###Output
_____no_output_____
###Markdown
Next let's compute $\max(0, \|\mathbf{v}_k\| - m^{-})^2$ and reshape it:
###Code
absent_error_raw = tf.square(tf.maximum(0., caps2_output_norm - m_minus),
name="absent_error_raw")
absent_error = tf.reshape(absent_error_raw, shape=(-1, 10),
name="absent_error")
###Output
_____no_output_____
###Markdown
We are ready to compute the loss for each instance and each digit:
###Code
L = tf.add(T * present_error, lambda_ * (1.0 - T) * absent_error,
name="L")
###Output
_____no_output_____
###Markdown
Now we can sum the digit losses for each instance ($L_0 + L_1 + \cdots + L_9$), and compute the mean over all instances. This gives us the final margin loss:
###Code
margin_loss = tf.reduce_mean(tf.reduce_sum(L, axis=1), name="margin_loss")
###Output
_____no_output_____
###Markdown
Reconstruction Now let's add a decoder network on top of the capsule network. It is a regular 3-layer fully connected neural network which will learn to reconstruct the input images based on the output of the capsule network. This will force the capsule network to preserve all the information required to reconstruct the digits, across the whole network. This constraint regularizes the model: it reduces the risk of overfitting the training set, and it helps generalize to new digits. Mask The paper mentions that during training, instead of sending all the outputs of the capsule network to the decoder network, we must send only the output vector of the capsule that corresponds to the target digit. All the other output vectors must be masked out. At inference time, we must mask all output vectors except for the longest one, i.e., the one that corresponds to the predicted digit. You can see this in the paper's figure 2 (at 18:15 in the video): all output vectors are masked out, except for the reconstruction target's output vector. We need a placeholder to tell TensorFlow whether we want to mask the output vectors based on the labels (`True`) or on the predictions (`False`, the default):
###Code
mask_with_labels = tf.placeholder_with_default(False, shape=(),
name="mask_with_labels")
###Output
_____no_output_____
###Markdown
Now let's use `tf.cond()` to define the reconstruction targets as the labels `y` if `mask_with_labels` is `True`, or `y_pred` otherwise.
###Code
reconstruction_targets = tf.cond(mask_with_labels, # condition
lambda: y, # if True
lambda: y_pred, # if False
name="reconstruction_targets")
###Output
_____no_output_____
###Markdown
Note that the `tf.cond()` function expects the if-True and if-False tensors to be passed _via_ functions: these functions will be called just once during the graph construction phase (not during the execution phase), similar to `tf.while_loop()`. This allows TensorFlow to add the necessary operations to handle the conditional evaluation of the if-True or if-False tensors. However, in our case, the tensors `y` and `y_pred` are already created by the time we call `tf.cond()`, so unfortunately TensorFlow will consider both `y` and `y_pred` to be dependencies of the `reconstruction_targets` tensor. The `reconstruction_targets` tensor will end up with the correct value, but:1. whenever we evaluate a tensor that depends on `reconstruction_targets`, the `y_pred` tensor will be evaluated (even if `mask_with_labels` is `True`). This is not a big deal because computing `y_pred` adds no computing overhead during training, since we need it anyway to compute the margin loss. And during testing, if we are doing classification, we won't need reconstructions, so `reconstruction_targets` won't be evaluated at all.2. we will always need to feed a value for the `y` placeholder (even if `mask_with_labels` is `False`). This is a bit annoying, but we can pass an empty array, because TensorFlow won't use it anyway (it just does not know it yet when it checks for dependencies). Now that we have the reconstruction targets, let's create the reconstruction mask. It should be equal to 1.0 for the target class, and 0.0 for the other classes, for each instance. For this we can just use the `tf.one_hot()` function:
###Code
reconstruction_mask = tf.one_hot(reconstruction_targets,
depth=caps2_n_caps,
name="reconstruction_mask")
###Output
_____no_output_____
###Markdown
Let's check the shape of `reconstruction_mask`:
###Code
reconstruction_mask
###Output
_____no_output_____
###Markdown
Let's compare this to the shape of `caps2_output`:
###Code
caps2_output
###Output
_____no_output_____
###Markdown
Mmh, its shape is (_batch size_, 1, 10, 16, 1). We want to multiply it by the `reconstruction_mask`, but the shape of the `reconstruction_mask` is (_batch size_, 10). We must reshape it to (_batch size_, 1, 10, 1, 1) to make multiplication possible:
###Code
reconstruction_mask_reshaped = tf.reshape(
reconstruction_mask, [-1, 1, caps2_n_caps, 1, 1],
name="reconstruction_mask_reshaped")
###Output
_____no_output_____
###Markdown
At last! We can apply the mask:
###Code
caps2_output_masked = tf.multiply(
caps2_output, reconstruction_mask_reshaped,
name="caps2_output_masked")
caps2_output_masked
###Output
_____no_output_____
###Markdown
One last reshape operation to flatten the decoder's inputs:
###Code
decoder_input = tf.reshape(caps2_output_masked,
[-1, caps2_n_caps * caps2_n_dims],
name="decoder_input")
###Output
_____no_output_____
###Markdown
This gives us an array of shape (_batch size_, 160):
###Code
decoder_input
###Output
_____no_output_____
###Markdown
Decoder Now let's build the decoder. It's quite simple: two dense (fully connected) ReLU layers followed by a dense output sigmoid layer:
###Code
n_hidden1 = 512
n_hidden2 = 1024
n_output = 28 * 28
with tf.name_scope("decoder"):
hidden1 = tf.layers.dense(decoder_input, n_hidden1,
activation=tf.nn.relu,
name="hidden1")
hidden2 = tf.layers.dense(hidden1, n_hidden2,
activation=tf.nn.relu,
name="hidden2")
decoder_output = tf.layers.dense(hidden2, n_output,
activation=tf.nn.sigmoid,
name="decoder_output")
###Output
_____no_output_____
###Markdown
Reconstruction Loss Now let's compute the reconstruction loss. It is just the squared difference between the input image and the reconstructed image:
###Code
X_flat = tf.reshape(X, [-1, n_output], name="X_flat")
squared_difference = tf.square(X_flat - decoder_output,
name="squared_difference")
reconstruction_loss = tf.reduce_sum(squared_difference,
name="reconstruction_loss")
###Output
_____no_output_____
###Markdown
Final Loss The final loss is the sum of the margin loss and the reconstruction loss (scaled down by a factor of 0.0005 to ensure the margin loss dominates training):
###Code
alpha = 0.0005
loss = tf.add(margin_loss, alpha * reconstruction_loss, name="loss")
###Output
_____no_output_____
###Markdown
Final Touches Accuracy To measure our model's accuracy, we need to count the number of instances that are properly classified. For this, we can simply compare `y` and `y_pred`, convert the boolean value to a float32 (0.0 for False, 1.0 for True), and compute the mean over all the instances:
###Code
correct = tf.equal(y, y_pred, name="correct")
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32), name="accuracy")
###Output
_____no_output_____
###Markdown
Training Operations The paper mentions that the authors used the Adam optimizer with TensorFlow's default parameters:
###Code
optimizer = tf.train.AdamOptimizer()
training_op = optimizer.minimize(loss, name="training_op")
###Output
_____no_output_____
###Markdown
Init and Saver And let's add the usual variable initializer, as well as a `Saver`:
###Code
init = tf.global_variables_initializer()
saver = tf.train.Saver()
###Output
_____no_output_____
###Markdown
And... we're done with the construction phase! Please take a moment to celebrate. :) Training Training our capsule network is pretty standard. For simplicity, we won't do any fancy hyperparameter tuning, dropout or anything, we will just run the training operation over and over again, displaying the loss, and at the end of each epoch, measure the accuracy on the validation set, display it, and save the model if the validation loss is the lowest seen so far (this is a basic way to implement early stopping, without actually stopping). Hopefully the code should be self-explanatory, but here are a few details to note:* if a checkpoint file exists, it will be restored (this makes it possible to interrupt training, then restart it later from the last checkpoint),* we must not forget to feed `mask_with_labels=True` during training,* during testing, we let `mask_with_labels` default to `False` (but we still feed the labels since they are required to compute the accuracy),* the images loaded _via_ `mnist.train.next_batch()` are represented as `float32` arrays of shape \[784\], but the input placeholder `X` expects a `float32` array of shape \[28, 28, 1\], so we must reshape the images before we feed them to our model,* we evaluate the model's loss and accuracy on the full validation set (5,000 instances). To view progress and support systems that don't have a lot of RAM, the code evaluates the loss and accuracy on one batch at a time, and computes the mean loss and mean accuracy at the end.*Warning*: if you don't have a GPU, training will take a very long time (at least a few hours). With a GPU, it should take just a few minutes per epoch (e.g., 6 minutes on an NVidia GeForce GTX 1080Ti).
###Code
n_epochs = 10
batch_size = 50
restore_checkpoint = True
n_iterations_per_epoch = mnist.train.num_examples // batch_size
n_iterations_validation = mnist.validation.num_examples // batch_size
best_loss_val = np.infty
checkpoint_path = "./my_capsule_network"
with tf.Session() as sess:
if restore_checkpoint and tf.train.checkpoint_exists(checkpoint_path):
saver.restore(sess, checkpoint_path)
else:
init.run()
for epoch in range(n_epochs):
for iteration in range(1, n_iterations_per_epoch + 1):
X_batch, y_batch = mnist.train.next_batch(batch_size)
# Run the training operation and measure the loss:
_, loss_train = sess.run(
[training_op, loss],
feed_dict={X: X_batch.reshape([-1, 28, 28, 1]),
y: y_batch,
mask_with_labels: True})
print("\rIteration: {}/{} ({:.1f}%) Loss: {:.5f}".format(
iteration, n_iterations_per_epoch,
iteration * 100 / n_iterations_per_epoch,
loss_train),
end="")
# At the end of each epoch,
# measure the validation loss and accuracy:
loss_vals = []
acc_vals = []
for iteration in range(1, n_iterations_validation + 1):
X_batch, y_batch = mnist.validation.next_batch(batch_size)
loss_val, acc_val = sess.run(
[loss, accuracy],
feed_dict={X: X_batch.reshape([-1, 28, 28, 1]),
y: y_batch})
loss_vals.append(loss_val)
acc_vals.append(acc_val)
print("\rEvaluating the model: {}/{} ({:.1f}%)".format(
iteration, n_iterations_validation,
iteration * 100 / n_iterations_validation),
end=" " * 10)
loss_val = np.mean(loss_vals)
acc_val = np.mean(acc_vals)
print("\rEpoch: {} Val accuracy: {:.4f}% Loss: {:.6f}{}".format(
epoch + 1, acc_val * 100, loss_val,
" (improved)" if loss_val < best_loss_val else ""))
# And save the model if it improved:
if loss_val < best_loss_val:
save_path = saver.save(sess, checkpoint_path)
best_loss_val = loss_val
###Output
Epoch: 1 Val accuracy: 98.7000% Loss: 0.416563 (improved)
Epoch: 2 Val accuracy: 99.0400% Loss: 0.291740 (improved)
Epoch: 3 Val accuracy: 99.1200% Loss: 0.241666 (improved)
Epoch: 4 Val accuracy: 99.2800% Loss: 0.211442 (improved)
Epoch: 5 Val accuracy: 99.3200% Loss: 0.196026 (improved)
Epoch: 6 Val accuracy: 99.3600% Loss: 0.186166 (improved)
Epoch: 7 Val accuracy: 99.3400% Loss: 0.179290 (improved)
Epoch: 8 Val accuracy: 99.3800% Loss: 0.173593 (improved)
Epoch: 9 Val accuracy: 99.3600% Loss: 0.169071 (improved)
Epoch: 10 Val accuracy: 99.3400% Loss: 0.165477 (improved)
###Markdown
Training is finished, we reached over 99.3% accuracy on the validation set after just 5 epochs, things are looking good. Now let's evaluate the model on the test set. Evaluation
###Code
n_iterations_test = mnist.test.num_examples // batch_size
with tf.Session() as sess:
saver.restore(sess, checkpoint_path)
loss_tests = []
acc_tests = []
for iteration in range(1, n_iterations_test + 1):
X_batch, y_batch = mnist.test.next_batch(batch_size)
loss_test, acc_test = sess.run(
[loss, accuracy],
feed_dict={X: X_batch.reshape([-1, 28, 28, 1]),
y: y_batch})
loss_tests.append(loss_test)
acc_tests.append(acc_test)
print("\rEvaluating the model: {}/{} ({:.1f}%)".format(
iteration, n_iterations_test,
iteration * 100 / n_iterations_test),
end=" " * 10)
loss_test = np.mean(loss_tests)
acc_test = np.mean(acc_tests)
print("\rFinal test accuracy: {:.4f}% Loss: {:.6f}".format(
acc_test * 100, loss_test))
###Output
INFO:tensorflow:Restoring parameters from ./my_capsule_network
Final test accuracy: 99.4300% Loss: 0.165047
###Markdown
We reach 99.43% accuracy on the test set. Pretty nice. :) Predictions Now let's make some predictions! We first fix a few images from the test set, then we start a session, restore the trained model, evaluate `caps2_output` to get the capsule network's output vectors, `decoder_output` to get the reconstructions, and `y_pred` to get the class predictions:
###Code
n_samples = 5
sample_images = mnist.test.images[:n_samples].reshape([-1, 28, 28, 1])
with tf.Session() as sess:
saver.restore(sess, checkpoint_path)
caps2_output_value, decoder_output_value, y_pred_value = sess.run(
[caps2_output, decoder_output, y_pred],
feed_dict={X: sample_images,
y: np.array([], dtype=np.int64)})
###Output
INFO:tensorflow:Restoring parameters from ./my_capsule_network
###Markdown
Note: we feed `y` with an empty array, but TensorFlow will not use it, as explained earlier. And now let's plot the images and their labels, followed by the corresponding reconstructions and predictions:
###Code
sample_images = sample_images.reshape(-1, 28, 28)
reconstructions = decoder_output_value.reshape([-1, 28, 28])
plt.figure(figsize=(n_samples * 2, 3))
for index in range(n_samples):
plt.subplot(1, n_samples, index + 1)
plt.imshow(sample_images[index], cmap="binary")
plt.title("Label:" + str(mnist.test.labels[index]))
plt.axis("off")
plt.show()
plt.figure(figsize=(n_samples * 2, 3))
for index in range(n_samples):
plt.subplot(1, n_samples, index + 1)
plt.title("Predicted:" + str(y_pred_value[index]))
plt.imshow(reconstructions[index], cmap="binary")
plt.axis("off")
plt.show()
###Output
_____no_output_____
###Markdown
The predictions are all correct, and the reconstructions look great. Hurray! Interpreting the Output Vectors Let's tweak the output vectors to see what their pose parameters represent. First, let's check the shape of the `caps2_output_value` NumPy array:
###Code
caps2_output_value.shape
###Output
_____no_output_____
###Markdown
Let's create a function that will tweak each of the 16 pose parameters (dimensions) in all output vectors. Each tweaked output vector will be identical to the original output vector, except that one of its pose parameters will be incremented by a value varying from -0.5 to 0.5. By default there will be 11 steps (-0.5, -0.4, ..., +0.4, +0.5). This function will return an array of shape (_tweaked pose parameters_=16, _steps_=11, _batch size_=5, 1, 10, 16, 1):
###Code
def tweak_pose_parameters(output_vectors, min=-0.5, max=0.5, n_steps=11):
    steps = np.linspace(min, max, n_steps) # -0.5, -0.4, ..., +0.5
pose_parameters = np.arange(caps2_n_dims) # 0, 1, ..., 15
tweaks = np.zeros([caps2_n_dims, n_steps, 1, 1, 1, caps2_n_dims, 1])
tweaks[pose_parameters, :, 0, 0, 0, pose_parameters, 0] = steps
output_vectors_expanded = output_vectors[np.newaxis, np.newaxis]
return tweaks + output_vectors_expanded
###Output
_____no_output_____
###Markdown
Let's compute all the tweaked output vectors and reshape the result to (_parameters_×_steps_×_instances_, 1, 10, 16, 1) so we can feed the array to the decoder:
###Code
n_steps = 11
tweaked_vectors = tweak_pose_parameters(caps2_output_value, n_steps=n_steps)
tweaked_vectors_reshaped = tweaked_vectors.reshape(
[-1, 1, caps2_n_caps, caps2_n_dims, 1])
###Output
_____no_output_____
###Markdown
Now let's feed these tweaked output vectors to the decoder and get the reconstructions it produces:
###Code
tweak_labels = np.tile(mnist.test.labels[:n_samples], caps2_n_dims * n_steps)
with tf.Session() as sess:
saver.restore(sess, checkpoint_path)
decoder_output_value = sess.run(
decoder_output,
feed_dict={caps2_output: tweaked_vectors_reshaped,
mask_with_labels: True,
y: tweak_labels})
###Output
INFO:tensorflow:Restoring parameters from ./my_capsule_network
###Markdown
Let's reshape the decoder's output so we can easily iterate on the output dimension, the tweak steps, and the instances:
###Code
tweak_reconstructions = decoder_output_value.reshape(
[caps2_n_dims, n_steps, n_samples, 28, 28])
###Output
_____no_output_____
###Markdown
Lastly, let's plot all the reconstructions, for the first 3 output dimensions, for each tweaking step (column) and each digit (row):
###Code
for dim in range(3):
print("Tweaking output dimension #{}".format(dim))
plt.figure(figsize=(n_steps / 1.2, n_samples / 1.5))
for row in range(n_samples):
for col in range(n_steps):
plt.subplot(n_samples, n_steps, row * n_steps + col + 1)
plt.imshow(tweak_reconstructions[dim, col, row], cmap="binary")
plt.axis("off")
plt.show()
###Output
Tweaking output dimension #0
|
courses/machine_learning/deepdive2/structured/solutions/1b_prepare_data_babyweight.ipynb | ###Markdown
LAB 1b: Prepare babyweight dataset.**Learning Objectives**1. Set up the environment1. Preprocess natality dataset1. Augment natality dataset1. Create the train and eval tables in BigQuery1. Export data from BigQuery to GCS in CSV format Introduction In this notebook, we will prepare the babyweight dataset for model development and training to predict the weight of a baby before it is born. We will use BigQuery to perform data augmentation and preprocessing which will be used for AutoML Tables, BigQuery ML, and Keras models trained on Cloud AI Platform.In this lab, we will set up the environment, create the project dataset, preprocess and augment the natality dataset, create the train and eval tables in BigQuery, and export data from BigQuery to GCS in CSV format.Each learning objective will correspond to a __TODO__ in this student lab notebook -- try to complete this notebook first and then review the [solution notebook](../solutions/1b_prepare_data_babyweight.ipynb). Set up environment variables and load necessary libraries Check that the Google BigQuery library is installed and if not, install it.
###Code
%%bash
pip freeze | grep google-cloud-bigquery==1.6.1 || \
pip install google-cloud-bigquery==1.6.1
###Output
google-cloud-bigquery==1.6.1
###Markdown
Import necessary libraries.
###Code
import os
from google.cloud import bigquery
###Output
_____no_output_____
###Markdown
Set environment variables so that we can use them throughout the entire lab. We will be using our project name for our bucket, so you only need to change your project and region.
###Code
%%bash
export PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project Name is: "$PROJECT
# TODO: Change environment variables
PROJECT = "cloud-training-demos" # REPLACE WITH YOUR PROJECT NAME
BUCKET = PROJECT # DEFAULT BUCKET WILL BE PROJECT ID
REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# Do not change these
os.environ["BUCKET"] = PROJECT # DEFAULT BUCKET WILL BE PROJECT ID
os.environ["REGION"] = REGION
if PROJECT == "cloud-training-demos":
print("Don't forget to update your PROJECT name! Currently:", PROJECT)
###Output
_____no_output_____
###Markdown
The source datasetOur dataset is hosted in [BigQuery](https://cloud.google.com/bigquery/). The CDC's Natality data has details on US births from 1969 to 2008 and is a publicly available dataset, meaning anyone with a GCP account has access. Click [here](https://console.cloud.google.com/bigquery?project=bigquery-public-data&p=publicdata&d=samples&t=natality&page=table) to access the dataset.The natality dataset is relatively large at almost 138 million rows and 31 columns, but simple to understand. `weight_pounds` is the target, the continuous value we’ll train a model to predict. Create a BigQuery Dataset and Google Cloud Storage Bucket A BigQuery dataset is a container for tables, views, and models built with BigQuery ML. Let's create one called __babyweight__ if we have not already done so in an earlier lab. We'll do the same for a GCS bucket for our project too.
###Code
%%bash
# Create a BigQuery dataset for babyweight if it doesn't exist
datasetexists=$(bq ls -d | grep -w babyweight)
if [ -n "$datasetexists" ]; then
echo -e "BigQuery dataset already exists, let's not recreate it."
else
echo "Creating BigQuery dataset titled: babyweight"
bq --location=US mk --dataset \
--description "Babyweight" \
$PROJECT:babyweight
echo "Here are your current datasets:"
bq ls
fi
## Create GCS bucket if it doesn't exist already...
exists=$(gsutil ls -d | grep -w gs://${BUCKET}/)
if [ -n "$exists" ]; then
echo -e "Bucket exists, let's not recreate it."
else
echo "Creating a new GCS bucket."
gsutil mb -l ${REGION} gs://${BUCKET}
echo "Here are your current buckets:"
gsutil ls
fi
###Output
_____no_output_____
###Markdown
Create the training and evaluation data tablesSince there is already a publicly available dataset, we can simply create the training and evaluation data tables using this raw input data. First we are going to create a subset of the data limiting our columns to `weight_pounds`, `is_male`, `mother_age`, `plurality`, and `gestation_weeks` as well as some simple filtering and a column to hash on for repeatable splitting.* Note: The dataset in the create table code below is the one created previously, e.g. "babyweight". Preprocess and filter datasetWe have some preprocessing and filtering we would like to do to get our data in the right format for training.Preprocessing:* Cast `is_male` from `BOOL` to `STRING`* Cast `plurality` from `INTEGER` to `STRING` where `[1, 2, 3, 4, 5]` becomes `["Single(1)", "Twins(2)", "Triplets(3)", "Quadruplets(4)", "Quintuplets(5)"]`* Add `hashcolumn` hashing on `year` and `month`Filtering:* Only want data for years later than `2000`* Only want baby weights greater than `0`* Only want mothers whose age is greater than `0`* Only want plurality to be greater than `0`* Only want the number of weeks of gestation to be greater than `0`
###Code
%%bigquery
CREATE OR REPLACE TABLE
babyweight.babyweight_data AS
SELECT
weight_pounds,
CAST(is_male AS STRING) AS is_male,
mother_age,
CASE
WHEN plurality = 1 THEN "Single(1)"
WHEN plurality = 2 THEN "Twins(2)"
WHEN plurality = 3 THEN "Triplets(3)"
WHEN plurality = 4 THEN "Quadruplets(4)"
WHEN plurality = 5 THEN "Quintuplets(5)"
END AS plurality,
gestation_weeks,
ABS(
FARM_FINGERPRINT(
CONCAT(
CAST(year AS STRING),
CAST(month AS STRING)
)
)
) AS hashmonth
FROM
publicdata.samples.natality
WHERE
year > 2000
AND weight_pounds > 0
AND mother_age > 0
AND plurality > 0
AND gestation_weeks > 0
###Output
_____no_output_____
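###Markdown
To sanity-check the casting and filtering, here is a sketch that tallies the recoded `plurality` strings (assumes the same `%%bigquery` magic):
###Code
%%bigquery
SELECT
    plurality,
    COUNT(*) AS num_rows
FROM
    babyweight.babyweight_data
GROUP BY
    plurality
ORDER BY
    num_rows DESC
###Output
_____no_output_____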
###Markdown
Augment dataset to simulate missing dataNow we want to augment our dataset with our simulated babyweight data by setting all gender information to `Unknown` and setting plurality of all non-single births to `Multiple(2+)`.
###Code
%%bigquery
CREATE OR REPLACE TABLE
babyweight.babyweight_augmented_data AS
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
hashmonth
FROM
babyweight.babyweight_data
UNION ALL
SELECT
weight_pounds,
"Unknown" AS is_male,
mother_age,
CASE
WHEN plurality = "Single(1)" THEN plurality
ELSE "Multiple(2+)"
END AS plurality,
gestation_weeks,
hashmonth
FROM
babyweight.babyweight_data
###Output
_____no_output_____
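###Markdown
Since the augmentation unions a masked copy onto the original rows, the new table should contain roughly one `Unknown` row per original row. A sketch to verify (assumes the `%%bigquery` magic):
###Code
%%bigquery
SELECT
    is_male,
    COUNT(*) AS num_rows
FROM
    babyweight.babyweight_augmented_data
GROUP BY
    is_male
###Output
_____no_output_____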
###Markdown
Split augmented dataset into train and eval setsUsing `hashmonth`, apply a modulo to get approximately a 75/25 train/eval split. Split augmented dataset into train dataset
###Code
%%bigquery
CREATE OR REPLACE TABLE
babyweight.babyweight_data_train AS
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks
FROM
babyweight.babyweight_augmented_data
WHERE
MOD(hashmonth, 4) < 3
###Output
_____no_output_____
###Markdown
Split augmented dataset into eval dataset
###Code
%%bigquery
CREATE OR REPLACE TABLE
babyweight.babyweight_data_eval AS
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks
FROM
babyweight.babyweight_augmented_data
WHERE
MOD(hashmonth, 4) = 3
###Output
_____no_output_____
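###Markdown
A quick tally of both tables should show roughly three training rows for every evaluation row (a sketch; assumes the `%%bigquery` magic):
###Code
%%bigquery
SELECT
    "train" AS split,
    COUNT(*) AS num_rows
FROM
    babyweight.babyweight_data_train
UNION ALL
SELECT
    "eval" AS split,
    COUNT(*) AS num_rows
FROM
    babyweight.babyweight_data_eval
###Output
_____no_output_____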
###Markdown
Verify table creationVerify that you created the dataset and training data table.
###Code
%%bigquery
-- LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT * FROM babyweight.babyweight_data_train
LIMIT 0
%%bigquery
-- LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT * FROM babyweight.babyweight_data_eval
LIMIT 0
###Output
_____no_output_____
###Markdown
Export from BigQuery to CSVs in GCSUse the BigQuery Python API to export our train and eval tables to Google Cloud Storage in CSV format for later use in TensorFlow/Keras training. We'll use the dataset we created above and repeat the process for both the training and evaluation data.
###Code
# Construct a BigQuery client object.
client = bigquery.Client()
dataset_name = "babyweight"
# Create dataset reference object
dataset_ref = client.dataset(
dataset_id=dataset_name, project=client.project)
# Export both train and eval tables
for step in ["train", "eval"]:
destination_uri = os.path.join(
"gs://", BUCKET, dataset_name, "data", "{}*.csv".format(step))
table_name = "babyweight_data_{}".format(step)
table_ref = dataset_ref.table(table_name)
extract_job = client.extract_table(
table_ref,
destination_uri,
# Location must match that of the source table.
location="US",
) # API request
extract_job.result() # Waits for job to complete.
print("Exported {}:{}.{} to {}".format(
client.project, dataset_name, table_name, destination_uri))
###Output
Exported qwiklabs-gcp-4b437f7e5bfff9dd:babyweight.babyweight_data_train to gs://qwiklabs-gcp-4b437f7e5bfff9dd/babyweight/data/train*.csv
Exported qwiklabs-gcp-4b437f7e5bfff9dd:babyweight.babyweight_data_eval to gs://qwiklabs-gcp-4b437f7e5bfff9dd/babyweight/data/eval*.csv
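###Markdown
The export can also be verified from Python with the Cloud Storage client (a minimal sketch, not part of the original lab; it assumes `google-cloud-storage` is installed in this environment and the `BUCKET` environment variable set above):
###Code
from google.cloud import storage

# List the exported CSV shards under the babyweight/data/ prefix.
storage_client = storage.Client()
for blob in storage_client.list_blobs(
        os.environ["BUCKET"], prefix="babyweight/data/"):
    print(blob.name)
###Output
_____no_output_____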
###Markdown
Verify CSV creationVerify that we correctly created the CSV files in our bucket.
###Code
%%bash
gsutil ls gs://${BUCKET}/babyweight/data/*.csv
%%bash
gsutil cat gs://${BUCKET}/babyweight/data/train000000000000.csv | head -5
%%bash
gsutil cat gs://${BUCKET}/babyweight/data/eval000000000000.csv | head -5
###Output
weight_pounds,is_male,mother_age,plurality,gestation_weeks
2.74916440714,false,44,Single(1),30
3.68833364326,true,42,Single(1),31
9.49971886958,false,15,Single(1),46
8.4437046346,Unknown,15,Single(1),31
###Markdown
LAB 1b: Prepare babyweight dataset.**Learning Objectives**1. Set up the environment1. Preprocess the natality dataset1. Augment the natality dataset1. Create the train and eval tables in BigQuery1. Export data from BigQuery to GCS in CSV format Introduction In this notebook, we will prepare the babyweight dataset for model development and training to predict the weight of a baby before it is born. We will use BigQuery to perform the data augmentation and preprocessing that will feed AutoML Tables, BigQuery ML, and Keras models trained on Cloud AI Platform.In this lab, we will set up the environment, create the project dataset, preprocess and augment the natality dataset, create the train and eval tables in BigQuery, and export data from BigQuery to GCS in CSV format.Each learning objective will correspond to a __TODO__ in this student lab notebook -- try to complete this notebook first and then review the [solution notebook](../solutions/1b_prepare_data_babyweight.ipynb). Set up environment variables and load necessary libraries Check that the Google BigQuery library is installed, and if not, install it.
###Code
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
!pip install --user google-cloud-bigquery==1.25.0
###Output
Collecting google-cloud-bigquery==1.25.0
Downloading google_cloud_bigquery-1.25.0-py2.py3-none-any.whl (169 kB)
|████████████████████████████████| 169 kB 4.8 MB/s eta 0:00:01
Requirement already satisfied: protobuf>=3.6.0 in /opt/conda/lib/python3.7/site-packages (from google-cloud-bigquery==1.25.0) (3.13.0)
Requirement already satisfied: six<2.0.0dev,>=1.13.0 in /opt/conda/lib/python3.7/site-packages (from google-cloud-bigquery==1.25.0) (1.15.0)
Requirement already satisfied: google-api-core<2.0dev,>=1.15.0 in /opt/conda/lib/python3.7/site-packages (from google-cloud-bigquery==1.25.0) (1.22.1)
Collecting google-resumable-media<0.6dev,>=0.5.0
Downloading google_resumable_media-0.5.1-py2.py3-none-any.whl (38 kB)
Requirement already satisfied: google-auth<2.0dev,>=1.9.0 in /opt/conda/lib/python3.7/site-packages (from google-cloud-bigquery==1.25.0) (1.20.1)
Requirement already satisfied: google-cloud-core<2.0dev,>=1.1.0 in /opt/conda/lib/python3.7/site-packages (from google-cloud-bigquery==1.25.0) (1.3.0)
Requirement already satisfied: setuptools in /opt/conda/lib/python3.7/site-packages (from protobuf>=3.6.0->google-cloud-bigquery==1.25.0) (49.6.0.post20200814)
Requirement already satisfied: pytz in /opt/conda/lib/python3.7/site-packages (from google-api-core<2.0dev,>=1.15.0->google-cloud-bigquery==1.25.0) (2020.1)
Requirement already satisfied: googleapis-common-protos<2.0dev,>=1.6.0 in /opt/conda/lib/python3.7/site-packages (from google-api-core<2.0dev,>=1.15.0->google-cloud-bigquery==1.25.0) (1.51.0)
Requirement already satisfied: requests<3.0.0dev,>=2.18.0 in /opt/conda/lib/python3.7/site-packages (from google-api-core<2.0dev,>=1.15.0->google-cloud-bigquery==1.25.0) (2.24.0)
Requirement already satisfied: cachetools<5.0,>=2.0.0 in /opt/conda/lib/python3.7/site-packages (from google-auth<2.0dev,>=1.9.0->google-cloud-bigquery==1.25.0) (4.1.1)
Requirement already satisfied: rsa<5,>=3.1.4; python_version >= 3.5 in /opt/conda/lib/python3.7/site-packages (from google-auth<2.0dev,>=1.9.0->google-cloud-bigquery==1.25.0) (4.6)
Requirement already satisfied: pyasn1-modules>=0.2.1 in /opt/conda/lib/python3.7/site-packages (from google-auth<2.0dev,>=1.9.0->google-cloud-bigquery==1.25.0) (0.2.8)
Requirement already satisfied: chardet<4,>=3.0.2 in /opt/conda/lib/python3.7/site-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<2.0dev,>=1.15.0->google-cloud-bigquery==1.25.0) (3.0.4)
Requirement already satisfied: idna<3,>=2.5 in /opt/conda/lib/python3.7/site-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<2.0dev,>=1.15.0->google-cloud-bigquery==1.25.0) (2.10)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /opt/conda/lib/python3.7/site-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<2.0dev,>=1.15.0->google-cloud-bigquery==1.25.0) (1.25.10)
Requirement already satisfied: certifi>=2017.4.17 in /opt/conda/lib/python3.7/site-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<2.0dev,>=1.15.0->google-cloud-bigquery==1.25.0) (2020.6.20)
Requirement already satisfied: pyasn1>=0.1.3 in /opt/conda/lib/python3.7/site-packages (from rsa<5,>=3.1.4; python_version >= 3.5->google-auth<2.0dev,>=1.9.0->google-cloud-bigquery==1.25.0) (0.4.8)
Installing collected packages: google-resumable-media, google-cloud-bigquery
ERROR: After October 2020 you may experience errors when installing or updating packages. This is because pip will change the way that it resolves dependency conflicts.
We recommend you use --use-feature=2020-resolver to test your packages with the new resolver before it becomes the default.
google-cloud-storage 1.30.0 requires google-resumable-media<2.0dev,>=0.6.0, but you'll have google-resumable-media 0.5.1 which is incompatible.
Successfully installed google-cloud-bigquery-1.25.0 google-resumable-media-0.5.1
###Markdown
**Note**: Restart your kernel to use updated packages. Kindly ignore the deprecation warnings and incompatibility errors related to google-cloud-storage. Import necessary libraries.
###Code
import os
from google.cloud import bigquery
###Output
_____no_output_____
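###Markdown
After restarting the kernel, a one-line check confirms the pinned client version was picked up (a sketch; recent `google-cloud-bigquery` releases expose `__version__`):
###Code
print(bigquery.__version__)
###Output
_____no_output_____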
###Markdown
Set environment variables so that we can use them throughout the entire lab. We will be using our project name for our bucket, so you only need to change your project and region.
###Code
%%bash
export PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project Name is: "$PROJECT
# TODO: Change environment variables
PROJECT = "cloud-training-demos" # REPLACE WITH YOUR PROJECT NAME
BUCKET = "BUCKET" # REPLACE WITH YOUR PROJECT NAME, DEFAULT BUCKET WILL BE PROJECT ID
REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# Do not change these
os.environ["BUCKET"] = PROJECT if BUCKET == "BUCKET" else BUCKET # DEFAULT BUCKET WILL BE PROJECT ID
os.environ["REGION"] = REGION
if PROJECT == "cloud-training-demos":
print("Don't forget to update your PROJECT name! Currently:", PROJECT)
###Output
_____no_output_____
###Markdown
The source datasetOur dataset is hosted in [BigQuery](https://cloud.google.com/bigquery/). The CDC's Natality data has details on US births from 1969 to 2008 and is a publicly available dataset, meaning anyone with a GCP account has access. Click [here](https://console.cloud.google.com/bigquery?project=bigquery-public-data&p=publicdata&d=samples&t=natality&page=table) to access the dataset.The natality dataset is relatively large at almost 138 million rows and 31 columns, but simple to understand. `weight_pounds` is the target, the continuous value we’ll train a model to predict. Create a BigQuery Dataset and Google Cloud Storage Bucket A BigQuery dataset is a container for tables, views, and models built with BigQuery ML. Let's create one called __babyweight__ if we have not already done so in an earlier lab. We'll also create a GCS bucket for our project.
###Code
%%bash
# Create a BigQuery dataset for babyweight if it doesn't exist
datasetexists=$(bq ls -d | grep -w babyweight)
if [ -n "$datasetexists" ]; then
echo -e "BigQuery dataset already exists, let's not recreate it."
else
echo "Creating BigQuery dataset titled: babyweight"
bq --location=US mk --dataset \
--description "Babyweight" \
$PROJECT:babyweight
echo "Here are your current datasets:"
bq ls
fi
## Create GCS bucket if it doesn't exist already...
exists=$(gsutil ls -d | grep -w gs://${BUCKET}/)
if [ -n "$exists" ]; then
echo -e "Bucket exists, let's not recreate it."
else
echo "Creating a new GCS bucket."
gsutil mb -l ${REGION} gs://${BUCKET}
echo "Here are your current buckets:"
gsutil ls
fi
###Output
_____no_output_____
###Markdown
Create the training and evaluation data tablesSince there is already a publicly available dataset, we can simply create the training and evaluation data tables using this raw input data. First we are going to create a subset of the data, limiting our columns to `weight_pounds`, `is_male`, `mother_age`, `plurality`, and `gestation_weeks`, applying some simple filtering, and adding a column to hash on for repeatable splitting.* Note: The dataset in the create table code below is the one created previously, e.g. "babyweight". Preprocess and filter datasetWe have some preprocessing and filtering we would like to do to get our data in the right format for training.Preprocessing:* Cast `is_male` from `BOOL` to `STRING`* Cast `plurality` from `INTEGER` to `STRING` where `[1, 2, 3, 4, 5]` becomes `["Single(1)", "Twins(2)", "Triplets(3)", "Quadruplets(4)", "Quintuplets(5)"]`* Add a `hashmonth` column hashing on `year` and `month`Filtering:* Only want data for years later than `2000`* Only want baby weights greater than `0`* Only want mothers whose age is greater than `0`* Only want plurality to be greater than `0`* Only want the number of weeks of gestation to be greater than `0`
###Code
%%bigquery
CREATE OR REPLACE TABLE
babyweight.babyweight_data AS
SELECT
weight_pounds,
CAST(is_male AS STRING) AS is_male,
mother_age,
CASE
WHEN plurality = 1 THEN "Single(1)"
WHEN plurality = 2 THEN "Twins(2)"
WHEN plurality = 3 THEN "Triplets(3)"
WHEN plurality = 4 THEN "Quadruplets(4)"
WHEN plurality = 5 THEN "Quintuplets(5)"
END AS plurality,
gestation_weeks,
FARM_FINGERPRINT(
CONCAT(
CAST(year AS STRING),
CAST(month AS STRING)
)
) AS hashmonth
FROM
publicdata.samples.natality
WHERE
year > 2000
AND weight_pounds > 0
AND mother_age > 0
AND plurality > 0
AND gestation_weeks > 0
###Output
_____no_output_____
###Markdown
Augment dataset to simulate missing dataNow we want to augment our dataset with our simulated babyweight data by setting all gender information to `Unknown` and setting plurality of all non-single births to `Multiple(2+)`.
###Code
%%bigquery
CREATE OR REPLACE TABLE
babyweight.babyweight_augmented_data AS
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
hashmonth
FROM
babyweight.babyweight_data
UNION ALL
SELECT
weight_pounds,
"Unknown" AS is_male,
mother_age,
CASE
WHEN plurality = "Single(1)" THEN plurality
ELSE "Multiple(2+)"
END AS plurality,
gestation_weeks,
hashmonth
FROM
babyweight.babyweight_data
###Output
_____no_output_____
###Markdown
Split augmented dataset into train and eval setsUsing `hashmonth`, apply a modulo to get approximately a 75/25 train/eval split. Split augmented dataset into train dataset
###Code
%%bigquery
CREATE OR REPLACE TABLE
babyweight.babyweight_data_train AS
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks
FROM
babyweight.babyweight_augmented_data
WHERE
ABS(MOD(hashmonth, 4)) < 3
###Output
_____no_output_____
###Markdown
Split augmented dataset into eval dataset
###Code
%%bigquery
CREATE OR REPLACE TABLE
babyweight.babyweight_data_eval AS
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks
FROM
babyweight.babyweight_augmented_data
WHERE
ABS(MOD(hashmonth, 4)) = 3
###Output
_____no_output_____
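###Markdown
Note that this version of the lab stores the raw `FARM_FINGERPRINT`, which is a signed INT64, so the split wraps `MOD` in `ABS` to keep the bucket values in `[0, 3]`. A sketch to confirm the signed range (assumes the `%%bigquery` magic):
###Code
%%bigquery
SELECT
    MIN(hashmonth) AS min_hashmonth,
    MAX(hashmonth) AS max_hashmonth
FROM
    babyweight.babyweight_augmented_data
###Output
_____no_output_____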
###Markdown
Verify table creationVerify that you created the dataset and training data table.
###Code
%%bigquery
-- LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT * FROM babyweight.babyweight_data_train
LIMIT 0
%%bigquery
-- LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT * FROM babyweight.babyweight_data_eval
LIMIT 0
###Output
_____no_output_____
###Markdown
Export from BigQuery to CSVs in GCSUse the BigQuery Python API to export our train and eval tables to Google Cloud Storage in CSV format for later use in TensorFlow/Keras training. We'll use the dataset we created above and repeat the process for both the training and evaluation data.
###Code
# Construct a BigQuery client object.
client = bigquery.Client()
dataset_name = "babyweight"
# Create dataset reference object
dataset_ref = client.dataset(
dataset_id=dataset_name, project=client.project)
# Export both train and eval tables
for step in ["train", "eval"]:
destination_uri = os.path.join(
"gs://", BUCKET, dataset_name, "data", "{}*.csv".format(step))
table_name = "babyweight_data_{}".format(step)
table_ref = dataset_ref.table(table_name)
extract_job = client.extract_table(
table_ref,
destination_uri,
# Location must match that of the source table.
location="US",
) # API request
extract_job.result() # Waits for job to complete.
print("Exported {}:{}.{} to {}".format(
client.project, dataset_name, table_name, destination_uri))
###Output
Exported qwiklabs-gcp-4b437f7e5bfff9dd:babyweight.babyweight_data_train to gs://qwiklabs-gcp-4b437f7e5bfff9dd/babyweight/data/train*.csv
Exported qwiklabs-gcp-4b437f7e5bfff9dd:babyweight.babyweight_data_eval to gs://qwiklabs-gcp-4b437f7e5bfff9dd/babyweight/data/eval*.csv
###Markdown
Verify CSV creationVerify that we correctly created the CSV files in our bucket.
###Code
%%bash
gsutil ls gs://${BUCKET}/babyweight/data/*.csv
%%bash
gsutil cat gs://${BUCKET}/babyweight/data/train000000000000.csv | head -5
%%bash
gsutil cat gs://${BUCKET}/babyweight/data/eval000000000000.csv | head -5
###Output
weight_pounds,is_male,mother_age,plurality,gestation_weeks
2.74916440714,false,44,Single(1),30
3.68833364326,true,42,Single(1),31
9.49971886958,false,15,Single(1),46
8.4437046346,Unknown,15,Single(1),31
###Markdown
LAB 1b: Prepare babyweight dataset.**Learning Objectives**1. Set up the environment1. Preprocess the natality dataset1. Augment the natality dataset1. Create the train and eval tables in BigQuery1. Export data from BigQuery to GCS in CSV format Introduction In this notebook, we will prepare the babyweight dataset for model development and training to predict the weight of a baby before it is born. We will use BigQuery to perform the data augmentation and preprocessing that will feed AutoML Tables, BigQuery ML, and Keras models trained on Cloud AI Platform.In this lab, we will set up the environment, create the project dataset, preprocess and augment the natality dataset, create the train and eval tables in BigQuery, and export data from BigQuery to GCS in CSV format.Each learning objective will correspond to a __TODO__ in this student lab notebook -- try to complete this notebook first and then review the [solution notebook](../solutions/1b_prepare_data_babyweight.ipynb). Set up environment variables and load necessary libraries Check that the Google BigQuery library is installed, and if not, install it.
###Code
%%bash
sudo pip freeze | grep google-cloud-bigquery==1.6.1 || \
sudo pip install google-cloud-bigquery==1.6.1
###Output
google-cloud-bigquery==1.6.1
###Markdown
Import necessary libraries.
###Code
import os
from google.cloud import bigquery
###Output
_____no_output_____
###Markdown
Set environment variables so that we can use them throughout the entire lab. We will be using our project name for our bucket, so you only need to change your project and region.
###Code
%%bash
export PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project Name is: "$PROJECT
# TODO: Change environment variables
PROJECT = "cloud-training-demos" # REPLACE WITH YOUR PROJECT NAME
BUCKET = "BUCKET" # REPLACE WITH YOUR PROJECT NAME, DEFAULT BUCKET WILL BE PROJECT ID
REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# Do not change these
os.environ["BUCKET"] = PROJECT if BUCKET == "BUCKET" else BUCKET # DEFAULT BUCKET WILL BE PROJECT ID
os.environ["REGION"] = REGION
if PROJECT == "cloud-training-demos":
print("Don't forget to update your PROJECT name! Currently:", PROJECT)
###Output
_____no_output_____
###Markdown
The source datasetOur dataset is hosted in [BigQuery](https://cloud.google.com/bigquery/). The CDC's Natality data has details on US births from 1969 to 2008 and is a publicly available dataset, meaning anyone with a GCP account has access. Click [here](https://console.cloud.google.com/bigquery?project=bigquery-public-data&p=publicdata&d=samples&t=natality&page=table) to access the dataset.The natality dataset is relatively large at almost 138 million rows and 31 columns, but simple to understand. `weight_pounds` is the target, the continuous value we’ll train a model to predict. Create a BigQuery Dataset and Google Cloud Storage Bucket A BigQuery dataset is a container for tables, views, and models built with BigQuery ML. Let's create one called __babyweight__ if we have not already done so in an earlier lab. We'll also create a GCS bucket for our project.
###Code
%%bash
# Create a BigQuery dataset for babyweight if it doesn't exist
datasetexists=$(bq ls -d | grep -w babyweight)
if [ -n "$datasetexists" ]; then
echo -e "BigQuery dataset already exists, let's not recreate it."
else
echo "Creating BigQuery dataset titled: babyweight"
bq --location=US mk --dataset \
--description "Babyweight" \
$PROJECT:babyweight
echo "Here are your current datasets:"
bq ls
fi
## Create GCS bucket if it doesn't exist already...
exists=$(gsutil ls -d | grep -w gs://${BUCKET}/)
if [ -n "$exists" ]; then
echo -e "Bucket exists, let's not recreate it."
else
echo "Creating a new GCS bucket."
gsutil mb -l ${REGION} gs://${BUCKET}
echo "Here are your current buckets:"
gsutil ls
fi
###Output
_____no_output_____
###Markdown
Create the training and evaluation data tablesSince there is already a publicly available dataset, we can simply create the training and evaluation data tables using this raw input data. First we are going to create a subset of the data, limiting our columns to `weight_pounds`, `is_male`, `mother_age`, `plurality`, and `gestation_weeks`, applying some simple filtering, and adding a column to hash on for repeatable splitting.* Note: The dataset in the create table code below is the one created previously, e.g. "babyweight". Preprocess and filter datasetWe have some preprocessing and filtering we would like to do to get our data in the right format for training.Preprocessing:* Cast `is_male` from `BOOL` to `STRING`* Cast `plurality` from `INTEGER` to `STRING` where `[1, 2, 3, 4, 5]` becomes `["Single(1)", "Twins(2)", "Triplets(3)", "Quadruplets(4)", "Quintuplets(5)"]`* Add a `hashmonth` column hashing on `year` and `month`Filtering:* Only want data for years later than `2000`* Only want baby weights greater than `0`* Only want mothers whose age is greater than `0`* Only want plurality to be greater than `0`* Only want the number of weeks of gestation to be greater than `0`
###Code
%%bigquery
CREATE OR REPLACE TABLE
babyweight.babyweight_data AS
SELECT
weight_pounds,
CAST(is_male AS STRING) AS is_male,
mother_age,
CASE
WHEN plurality = 1 THEN "Single(1)"
WHEN plurality = 2 THEN "Twins(2)"
WHEN plurality = 3 THEN "Triplets(3)"
WHEN plurality = 4 THEN "Quadruplets(4)"
WHEN plurality = 5 THEN "Quintuplets(5)"
END AS plurality,
gestation_weeks,
FARM_FINGERPRINT(
CONCAT(
CAST(year AS STRING),
CAST(month AS STRING)
)
) AS hashmonth
FROM
publicdata.samples.natality
WHERE
year > 2000
AND weight_pounds > 0
AND mother_age > 0
AND plurality > 0
AND gestation_weeks > 0
###Output
_____no_output_____
###Markdown
Augment dataset to simulate missing dataNow we want to augment our dataset with our simulated babyweight data by setting all gender information to `Unknown` and setting plurality of all non-single births to `Multiple(2+)`.
###Code
%%bigquery
CREATE OR REPLACE TABLE
babyweight.babyweight_augmented_data AS
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
hashmonth
FROM
babyweight.babyweight_data
UNION ALL
SELECT
weight_pounds,
"Unknown" AS is_male,
mother_age,
CASE
WHEN plurality = "Single(1)" THEN plurality
ELSE "Multiple(2+)"
END AS plurality,
gestation_weeks,
hashmonth
FROM
babyweight.babyweight_data
###Output
_____no_output_____
###Markdown
Split augmented dataset into train and eval setsUsing `hashmonth`, apply a modulo to get approximately a 75/25 train/eval split. Split augmented dataset into train dataset
###Code
%%bigquery
CREATE OR REPLACE TABLE
babyweight.babyweight_data_train AS
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks
FROM
babyweight.babyweight_augmented_data
WHERE
ABS(MOD(hashmonth, 4)) < 3
###Output
_____no_output_____
###Markdown
Split augmented dataset into eval dataset
###Code
%%bigquery
CREATE OR REPLACE TABLE
babyweight.babyweight_data_eval AS
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks
FROM
babyweight.babyweight_augmented_data
WHERE
ABS(MOD(hashmonth, 4)) = 3
###Output
_____no_output_____
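###Markdown
Because `hashmonth` depends only on `year` and `month`, every birth from a given month lands in the same split, which is what makes the split repeatable. A sketch that counts the distinct month hashes (assumes the `%%bigquery` magic):
###Code
%%bigquery
SELECT
    COUNT(DISTINCT hashmonth) AS num_hashmonths
FROM
    babyweight.babyweight_augmented_data
###Output
_____no_output_____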
###Markdown
Verify table creationVerify that you created the dataset and training data table.
###Code
%%bigquery
-- LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT * FROM babyweight.babyweight_data_train
LIMIT 0
%%bigquery
-- LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT * FROM babyweight.babyweight_data_eval
LIMIT 0
###Output
_____no_output_____
###Markdown
Export from BigQuery to CSVs in GCSUse the BigQuery Python API to export our train and eval tables to Google Cloud Storage in CSV format for later use in TensorFlow/Keras training. We'll use the dataset we created above and repeat the process for both the training and evaluation data.
###Code
# Construct a BigQuery client object.
client = bigquery.Client()
dataset_name = "babyweight"
# Create dataset reference object
dataset_ref = client.dataset(
dataset_id=dataset_name, project=client.project)
# Export both train and eval tables
for step in ["train", "eval"]:
destination_uri = os.path.join(
"gs://", BUCKET, dataset_name, "data", "{}*.csv".format(step))
table_name = "babyweight_data_{}".format(step)
table_ref = dataset_ref.table(table_name)
extract_job = client.extract_table(
table_ref,
destination_uri,
# Location must match that of the source table.
location="US",
) # API request
extract_job.result() # Waits for job to complete.
print("Exported {}:{}.{} to {}".format(
client.project, dataset_name, table_name, destination_uri))
###Output
Exported qwiklabs-gcp-4b437f7e5bfff9dd:babyweight.babyweight_data_train to gs://qwiklabs-gcp-4b437f7e5bfff9dd/babyweight/data/train*.csv
Exported qwiklabs-gcp-4b437f7e5bfff9dd:babyweight.babyweight_data_eval to gs://qwiklabs-gcp-4b437f7e5bfff9dd/babyweight/data/eval*.csv
###Markdown
Verify CSV creationVerify that we correctly created the CSV files in our bucket.
###Code
%%bash
gsutil ls gs://${BUCKET}/babyweight/data/*.csv
%%bash
gsutil cat gs://${BUCKET}/babyweight/data/train000000000000.csv | head -5
%%bash
gsutil cat gs://${BUCKET}/babyweight/data/eval000000000000.csv | head -5
###Output
weight_pounds,is_male,mother_age,plurality,gestation_weeks
2.74916440714,false,44,Single(1),30
3.68833364326,true,42,Single(1),31
9.49971886958,false,15,Single(1),46
8.4437046346,Unknown,15,Single(1),31
###Markdown
LAB 1b: Prepare babyweight dataset.**Learning Objectives**1. Set up the environment1. Preprocess the natality dataset1. Augment the natality dataset1. Create the train and eval tables in BigQuery1. Export data from BigQuery to GCS in CSV format Introduction In this notebook, we will prepare the babyweight dataset for model development and training to predict the weight of a baby before it is born. We will use BigQuery to perform the data augmentation and preprocessing that will feed AutoML Tables, BigQuery ML, and Keras models trained on Cloud AI Platform.In this lab, we will set up the environment, create the project dataset, preprocess and augment the natality dataset, create the train and eval tables in BigQuery, and export data from BigQuery to GCS in CSV format.Each learning objective will correspond to a __TODO__ in this student lab notebook -- try to complete this notebook first and then review the [solution notebook](../solutions/2_prepare_babyweight.ipynb). Set up environment variables and load necessary libraries Check that the Google BigQuery library is installed, and if not, install it.
###Code
%%bash
pip freeze | grep google-cloud-bigquery==1.6.1 || \
pip install google-cloud-bigquery==1.6.1
###Output
google-cloud-bigquery==1.6.1
###Markdown
Import necessary libraries.
###Code
import os
from google.cloud import bigquery
###Output
_____no_output_____
###Markdown
Set environment variables so that we can use them throughout the entire lab. We will be using our project name for our bucket, so you only need to change your project and region.
###Code
%%bash
export PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project Name is: "$PROJECT
# TODO: Change environment variables
PROJECT = "cloud-training-demos" # REPLACE WITH YOUR PROJECT NAME
BUCKET = PROJECT # DEFAULT BUCKET WILL BE PROJECT ID
REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# Do not change these
os.environ["BUCKET"] = PROJECT # DEFAULT BUCKET WILL BE PROJECT ID
os.environ["REGION"] = REGION
if PROJECT == "cloud-training-demos":
print("Don't forget to update your PROJECT name! Currently:", PROJECT)
###Output
_____no_output_____
###Markdown
The source datasetOur dataset is hosted in [BigQuery](https://cloud.google.com/bigquery/). The CDC's Natality data has details on US births from 1969 to 2008 and is a publicly available dataset, meaning anyone with a GCP account has access. Click [here](https://console.cloud.google.com/bigquery?project=bigquery-public-data&p=publicdata&d=samples&t=natality&page=table) to access the dataset.The natality dataset is relatively large at almost 138 million rows and 31 columns, but simple to understand. `weight_pounds` is the target, the continuous value we’ll train a model to predict. Create a BigQuery Dataset and Google Cloud Storage Bucket A BigQuery dataset is a container for tables, views, and models built with BigQuery ML. Let's create one called __babyweight__ if we have not already done so in an earlier lab. We'll also create a GCS bucket for our project.
###Code
%%bash
# Create a BigQuery dataset for babyweight if it doesn't exist
datasetexists=$(bq ls -d | grep -w babyweight)
if [ -n "$datasetexists" ]; then
echo -e "BigQuery dataset already exists, let's not recreate it."
else
echo "Creating BigQuery dataset titled: babyweight"
bq --location=US mk --dataset \
--description "Babyweight" \
$PROJECT:babyweight
echo "Here are your current datasets:"
bq ls
fi
## Create GCS bucket if it doesn't exist already...
exists=$(gsutil ls -d | grep -w gs://${BUCKET}/)
if [ -n "$exists" ]; then
echo -e "Bucket exists, let's not recreate it."
else
echo "Creating a new GCS bucket."
gsutil mb -l ${REGION} gs://${BUCKET}
echo "Here are your current buckets:"
gsutil ls
fi
###Output
_____no_output_____
###Markdown
Create the training and evaluation data tablesSince there is already a publicly available dataset, we can simply create the training and evaluation data tables using this raw input data. First we are going to create a subset of the data, limiting our columns to `weight_pounds`, `is_male`, `mother_age`, `plurality`, and `gestation_weeks`, applying some simple filtering, and adding a column to hash on for repeatable splitting.* Note: The dataset in the create table code below is the one created previously, e.g. "babyweight". Preprocess and filter datasetWe have some preprocessing and filtering we would like to do to get our data in the right format for training.Preprocessing:* Cast `is_male` from `BOOL` to `STRING`* Cast `plurality` from `INTEGER` to `STRING` where `[1, 2, 3, 4, 5]` becomes `["Single(1)", "Twins(2)", "Triplets(3)", "Quadruplets(4)", "Quintuplets(5)"]`* Add a `hashmonth` column hashing on `year` and `month`Filtering:* Only want data for years later than `2000`* Only want baby weights greater than `0`* Only want mothers whose age is greater than `0`* Only want plurality to be greater than `0`* Only want the number of weeks of gestation to be greater than `0`
###Code
%%bigquery
CREATE OR REPLACE TABLE
babyweight.babyweight_data AS
SELECT
weight_pounds,
CAST(is_male AS STRING) AS is_male,
mother_age,
CASE
WHEN plurality = 1 THEN "Single(1)"
WHEN plurality = 2 THEN "Twins(2)"
WHEN plurality = 3 THEN "Triplets(3)"
WHEN plurality = 4 THEN "Quadruplets(4)"
WHEN plurality = 5 THEN "Quintuplets(5)"
END AS plurality,
gestation_weeks,
ABS(
FARM_FINGERPRINT(
CONCAT(
CAST(year AS STRING),
CAST(month AS STRING)
)
)
) AS hashmonth
FROM
publicdata.samples.natality
WHERE
year > 2000
AND weight_pounds > 0
AND mother_age > 0
AND plurality > 0
AND gestation_weeks > 0
###Output
_____no_output_____
###Markdown
Augment dataset to simulate missing dataNow we want to augment our dataset with our simulated babyweight data by setting all gender information to `Unknown` and setting plurality of all non-single births to `Multiple(2+)`.
###Code
%%bigquery
CREATE OR REPLACE TABLE
babyweight.babyweight_augmented_data AS
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
hashmonth
FROM
babyweight.babyweight_data
UNION ALL
SELECT
weight_pounds,
"Unknown" AS is_male,
mother_age,
CASE
WHEN plurality = "Single(1)" THEN plurality
ELSE "Multiple(2+)"
END AS plurality,
gestation_weeks,
hashmonth
FROM
babyweight.babyweight_data
###Output
_____no_output_____
###Markdown
Split augmented dataset into train and eval setsUsing `hashmonth`, apply a modulo to get approximately a 75/25 train/eval split. Split augmented dataset into train dataset
###Code
%%bigquery
CREATE OR REPLACE TABLE
babyweight.babyweight_data_train AS
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks
FROM
babyweight.babyweight_augmented_data
WHERE
MOD(hashmonth, 4) < 3
###Output
_____no_output_____
###Markdown
Split augmented dataset into eval dataset
###Code
%%bigquery
CREATE OR REPLACE TABLE
babyweight.babyweight_data_eval AS
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks
FROM
babyweight.babyweight_augmented_data
WHERE
MOD(hashmonth, 4) = 3
###Output
_____no_output_____
###Markdown
Verify table creationVerify that you created the dataset and training data table.
###Code
%%bigquery
-- LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT * FROM babyweight.babyweight_data_train
LIMIT 0
%%bigquery
-- LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT * FROM babyweight.babyweight_data_eval
LIMIT 0
###Output
_____no_output_____
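###Markdown
The same verification can be done from Python (a minimal sketch, not part of the original lab; it assumes the `bigquery` client imported above):
###Code
client = bigquery.Client()
for name in ["babyweight_data_train", "babyweight_data_eval"]:
    # Fetch table metadata and print the row count and column names.
    table = client.get_table("{}.babyweight.{}".format(client.project, name))
    print(name, table.num_rows, [field.name for field in table.schema])
###Output
_____no_output_____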
###Markdown
Export from BigQuery to CSVs in GCSUse the BigQuery Python API to export our train and eval tables to Google Cloud Storage in CSV format for later use in TensorFlow/Keras training. We'll use the dataset we created above and repeat the process for both the training and evaluation data.
###Code
# Construct a BigQuery client object.
client = bigquery.Client()
dataset_name = "babyweight"
# Create dataset reference object
dataset_ref = client.dataset(
dataset_id=dataset_name, project=client.project)
# Export both train and eval tables
for step in ["train", "eval"]:
destination_uri = os.path.join(
"gs://", BUCKET, dataset_name, "data", "{}*.csv".format(step))
table_name = "babyweight_data_{}".format(step)
table_ref = dataset_ref.table(table_name)
extract_job = client.extract_table(
table_ref,
destination_uri,
# Location must match that of the source table.
location="US",
) # API request
extract_job.result() # Waits for job to complete.
print("Exported {}:{}.{} to {}".format(
client.project, dataset_name, table_name, destination_uri))
###Output
Exported qwiklabs-gcp-4b437f7e5bfff9dd:babyweight.babyweight_data_train to gs://qwiklabs-gcp-4b437f7e5bfff9dd/babyweight/data/train*.csv
Exported qwiklabs-gcp-4b437f7e5bfff9dd:babyweight.babyweight_data_eval to gs://qwiklabs-gcp-4b437f7e5bfff9dd/babyweight/data/eval*.csv
###Markdown
Verify CSV creationVerify that we correctly created the CSV files in our bucket.
###Code
%%bash
gsutil ls gs://${BUCKET}/babyweight/data/*.csv
%%bash
gsutil cat gs://${BUCKET}/babyweight/data/train000000000000.csv | head -5
%%bash
gsutil cat gs://${BUCKET}/babyweight/data/eval000000000000.csv | head -5
###Output
weight_pounds,is_male,mother_age,plurality,gestation_weeks
2.74916440714,false,44,Single(1),30
3.68833364326,true,42,Single(1),31
9.49971886958,false,15,Single(1),46
8.4437046346,Unknown,15,Single(1),31
###Markdown
LAB 1b: Prepare babyweight dataset.**Learning Objectives**1. Set up the environment1. Preprocess the natality dataset1. Augment the natality dataset1. Create the train and eval tables in BigQuery1. Export data from BigQuery to GCS in CSV format Introduction In this notebook, we will prepare the babyweight dataset for model development and training to predict the weight of a baby before it is born. We will use BigQuery to perform the data augmentation and preprocessing that will feed AutoML Tables, BigQuery ML, and Keras models trained on Cloud AI Platform.In this lab, we will set up the environment, create the project dataset, preprocess and augment the natality dataset, create the train and eval tables in BigQuery, and export data from BigQuery to GCS in CSV format.Each learning objective will correspond to a __TODO__ in this student lab notebook -- try to complete this notebook first and then review the [solution notebook](../solutions/1b_prepare_data_babyweight.ipynb). Set up environment variables and load necessary libraries Check that the Google BigQuery library is installed, and if not, install it.
###Code
%%bash
pip freeze | grep google-cloud-bigquery==1.6.1 || \
pip install google-cloud-bigquery==1.6.1
###Output
google-cloud-bigquery==1.6.1
###Markdown
Import necessary libraries.
###Code
import os
from google.cloud import bigquery
###Output
_____no_output_____
###Markdown
Set environment variables so that we can use them throughout the entire lab. We will be using our project name for our bucket, so you only need to change your project and region.
###Code
%%bash
export PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project Name is: "$PROJECT
# TODO: Change environment variables
PROJECT = "cloud-training-demos" # REPLACE WITH YOUR PROJECT NAME
BUCKET = PROJECT # DEFAULT BUCKET WILL BE PROJECT ID
REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# Do not change these
os.environ["BUCKET"] = PROJECT # DEFAULT BUCKET WILL BE PROJECT ID
os.environ["REGION"] = REGION
if PROJECT == "cloud-training-demos":
print("Don't forget to update your PROJECT name! Currently:", PROJECT)
###Output
_____no_output_____
###Markdown
The source datasetOur dataset is hosted in [BigQuery](https://cloud.google.com/bigquery/). The CDC's Natality data has details on US births from 1969 to 2008 and is a publicly available dataset, meaning anyone with a GCP account has access. Click [here](https://console.cloud.google.com/bigquery?project=bigquery-public-data&p=publicdata&d=samples&t=natality&page=table) to access the dataset.The natality dataset is relatively large at almost 138 million rows and 31 columns, but simple to understand. `weight_pounds` is the target, the continuous value we’ll train a model to predict. Create a BigQuery Dataset and Google Cloud Storage Bucket A BigQuery dataset is a container for tables, views, and models built with BigQuery ML. Let's create one called __babyweight__ if we have not already done so in an earlier lab. We'll also create a GCS bucket for our project.
###Code
%%bash
# Create a BigQuery dataset for babyweight if it doesn't exist
datasetexists=$(bq ls -d | grep -w babyweight)
if [ -n "$datasetexists" ]; then
echo -e "BigQuery dataset already exists, let's not recreate it."
else
echo "Creating BigQuery dataset titled: babyweight"
bq --location=US mk --dataset \
--description "Babyweight" \
$PROJECT:babyweight
echo "Here are your current datasets:"
bq ls
fi
## Create GCS bucket if it doesn't exist already...
exists=$(gsutil ls -d | grep -w gs://${BUCKET}/)
if [ -n "$exists" ]; then
echo -e "Bucket exists, let's not recreate it."
else
echo "Creating a new GCS bucket."
gsutil mb -l ${REGION} gs://${BUCKET}
echo "Here are your current buckets:"
gsutil ls
fi
###Output
_____no_output_____
###Markdown
Create the training and evaluation data tablesSince there is already a publicly available dataset, we can simply create the training and evaluation data tables using this raw input data. First we are going to create a subset of the data, limiting our columns to `weight_pounds`, `is_male`, `mother_age`, `plurality`, and `gestation_weeks`, applying some simple filtering, and adding a column to hash on for repeatable splitting.* Note: The dataset in the create table code below is the one created previously, e.g. "babyweight". Preprocess and filter datasetWe have some preprocessing and filtering we would like to do to get our data in the right format for training.Preprocessing:* Cast `is_male` from `BOOL` to `STRING`* Cast `plurality` from `INTEGER` to `STRING` where `[1, 2, 3, 4, 5]` becomes `["Single(1)", "Twins(2)", "Triplets(3)", "Quadruplets(4)", "Quintuplets(5)"]`* Add a `hashmonth` column hashing on `year` and `month`Filtering:* Only want data for years later than `2000`* Only want baby weights greater than `0`* Only want mothers whose age is greater than `0`* Only want plurality to be greater than `0`* Only want the number of weeks of gestation to be greater than `0`
###Code
%%bigquery
CREATE OR REPLACE TABLE
babyweight.babyweight_data AS
SELECT
weight_pounds,
CAST(is_male AS STRING) AS is_male,
mother_age,
CASE
WHEN plurality = 1 THEN "Single(1)"
WHEN plurality = 2 THEN "Twins(2)"
WHEN plurality = 3 THEN "Triplets(3)"
WHEN plurality = 4 THEN "Quadruplets(4)"
WHEN plurality = 5 THEN "Quintuplets(5)"
END AS plurality,
gestation_weeks,
FARM_FINGERPRINT(
CONCAT(
CAST(year AS STRING),
CAST(month AS STRING)
)
) AS hashmonth
FROM
publicdata.samples.natality
WHERE
year > 2000
AND weight_pounds > 0
AND mother_age > 0
AND plurality > 0
AND gestation_weeks > 0
###Output
_____no_output_____
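###Markdown
A sketch to confirm the filters took effect; the filtered table should be well under the raw table's ~138 million rows, with no zero values left in the filtered columns (assumes the `%%bigquery` magic):
###Code
%%bigquery
SELECT
    COUNT(*) AS num_rows,
    MIN(weight_pounds) AS min_weight_pounds,
    MIN(gestation_weeks) AS min_gestation_weeks
FROM
    babyweight.babyweight_data
###Output
_____no_output_____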
###Markdown
Augment dataset to simulate missing dataNow we want to augment our dataset with our simulated babyweight data by setting all gender information to `Unknown` and setting plurality of all non-single births to `Multiple(2+)`.
###Code
%%bigquery
CREATE OR REPLACE TABLE
babyweight.babyweight_augmented_data AS
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
hashmonth
FROM
babyweight.babyweight_data
UNION ALL
SELECT
weight_pounds,
"Unknown" AS is_male,
mother_age,
CASE
WHEN plurality = "Single(1)" THEN plurality
ELSE "Multiple(2+)"
END AS plurality,
gestation_weeks,
hashmonth
FROM
babyweight.babyweight_data
###Output
_____no_output_____
###Markdown
Split augmented dataset into train and eval setsUsing `hashmonth`, apply a modulo to get approximately a 75/25 train/eval split. Split augmented dataset into train dataset
###Code
%%bigquery
CREATE OR REPLACE TABLE
babyweight.babyweight_data_train AS
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks
FROM
babyweight.babyweight_augmented_data
WHERE
ABS(MOD(hashmonth, 4)) < 3
###Output
_____no_output_____
###Markdown
Split augmented dataset into eval dataset
###Code
%%bigquery
CREATE OR REPLACE TABLE
babyweight.babyweight_data_eval AS
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks
FROM
babyweight.babyweight_augmented_data
WHERE
ABS(MOD(hashmonth, 4)) = 3
###Output
_____no_output_____
###Markdown
Verify table creationVerify that you created the dataset and training data table.
###Code
%%bigquery
-- LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT * FROM babyweight.babyweight_data_train
LIMIT 0
%%bigquery
-- LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT * FROM babyweight.babyweight_data_eval
LIMIT 0
###Output
_____no_output_____
###Markdown
Export from BigQuery to CSVs in GCSUse the BigQuery Python API to export our train and eval tables to Google Cloud Storage in CSV format for later use in TensorFlow/Keras training. We'll use the dataset we created above and repeat the process for both the training and evaluation data.
###Code
# Construct a BigQuery client object.
client = bigquery.Client()
dataset_name = "babyweight"
# Create dataset reference object
dataset_ref = client.dataset(
dataset_id=dataset_name, project=client.project)
# Export both train and eval tables
for step in ["train", "eval"]:
destination_uri = os.path.join(
"gs://", BUCKET, dataset_name, "data", "{}*.csv".format(step))
table_name = "babyweight_data_{}".format(step)
table_ref = dataset_ref.table(table_name)
extract_job = client.extract_table(
table_ref,
destination_uri,
# Location must match that of the source table.
location="US",
) # API request
extract_job.result() # Waits for job to complete.
print("Exported {}:{}.{} to {}".format(
client.project, dataset_name, table_name, destination_uri))
###Output
Exported qwiklabs-gcp-4b437f7e5bfff9dd:babyweight.babyweight_data_train to gs://qwiklabs-gcp-4b437f7e5bfff9dd/babyweight/data/train*.csv
Exported qwiklabs-gcp-4b437f7e5bfff9dd:babyweight.babyweight_data_eval to gs://qwiklabs-gcp-4b437f7e5bfff9dd/babyweight/data/eval*.csv
###Markdown
Verify CSV creationVerify that we correctly created the CSV files in our bucket.
###Code
%%bash
gsutil ls gs://${BUCKET}/babyweight/data/*.csv
%%bash
gsutil cat gs://${BUCKET}/babyweight/data/train000000000000.csv | head -5
%%bash
gsutil cat gs://${BUCKET}/babyweight/data/eval000000000000.csv | head -5
###Output
weight_pounds,is_male,mother_age,plurality,gestation_weeks
2.74916440714,false,44,Single(1),30
3.68833364326,true,42,Single(1),31
9.49971886958,false,15,Single(1),46
8.4437046346,Unknown,15,Single(1),31
###Markdown
LAB 1b: Prepare babyweight dataset.**Learning Objectives**1. Set up the environment1. Preprocess the natality dataset1. Augment the natality dataset1. Create the train and eval tables in BigQuery1. Export data from BigQuery to GCS in CSV format Introduction In this notebook, we will prepare the babyweight dataset for model development and training to predict the weight of a baby before it is born. We will use BigQuery to perform the data augmentation and preprocessing that will feed AutoML Tables, BigQuery ML, and Keras models trained on Cloud AI Platform.In this lab, we will set up the environment, create the project dataset, preprocess and augment the natality dataset, create the train and eval tables in BigQuery, and export data from BigQuery to GCS in CSV format.Each learning objective will correspond to a __TODO__ in this student lab notebook -- try to complete this notebook first and then review the [solution notebook](../solutions/1b_prepare_data_babyweight.ipynb). Set up environment variables and load necessary libraries Check that the Google BigQuery library is installed, and if not, install it.
###Code
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
%%bash
sudo pip freeze | grep google-cloud-bigquery==1.6.1 || \
sudo pip install google-cloud-bigquery==1.6.1
###Output
google-cloud-bigquery==1.6.1
###Markdown
Import necessary libraries.
###Code
import os
from google.cloud import bigquery
###Output
_____no_output_____
###Markdown
Set environment variables so that we can use them throughout the entire lab. We will be using our project name for our bucket, so you only need to change your project and region.
###Code
%%bash
export PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project Name is: "$PROJECT
# TODO: Change environment variables
PROJECT = "cloud-training-demos" # REPLACE WITH YOUR PROJECT NAME
BUCKET = "BUCKET" # REPLACE WITH YOUR PROJECT NAME, DEFAULT BUCKET WILL BE PROJECT ID
REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# Do not change these
os.environ["BUCKET"] = PROJECT if BUCKET == "BUCKET" else BUCKET # DEFAULT BUCKET WILL BE PROJECT ID
os.environ["REGION"] = REGION
if PROJECT == "cloud-training-demos":
print("Don't forget to update your PROJECT name! Currently:", PROJECT)
###Output
_____no_output_____
###Markdown
The source datasetOur dataset is hosted in [BigQuery](https://cloud.google.com/bigquery/). The CDC's Natality data has details on US births from 1969 to 2008 and is a publicly available dataset, meaning anyone with a GCP account has access. Click [here](https://console.cloud.google.com/bigquery?project=bigquery-public-data&p=publicdata&d=samples&t=natality&page=table) to access the dataset.The natality dataset is relatively large at almost 138 million rows and 31 columns, but simple to understand. `weight_pounds` is the target, the continuous value we’ll train a model to predict. Create a BigQuery Dataset and Google Cloud Storage Bucket A BigQuery dataset is a container for tables, views, and models built with BigQuery ML. Let's create one called __babyweight__ if we have not already done so in an earlier lab. We'll also create a GCS bucket for our project.
###Code
%%bash
# Create a BigQuery dataset for babyweight if it doesn't exist
datasetexists=$(bq ls -d | grep -w babyweight)
if [ -n "$datasetexists" ]; then
echo -e "BigQuery dataset already exists, let's not recreate it."
else
echo "Creating BigQuery dataset titled: babyweight"
bq --location=US mk --dataset \
--description "Babyweight" \
$PROJECT:babyweight
echo "Here are your current datasets:"
bq ls
fi
## Create GCS bucket if it doesn't exist already...
exists=$(gsutil ls -d | grep -w gs://${BUCKET}/)
if [ -n "$exists" ]; then
echo -e "Bucket exists, let's not recreate it."
else
echo "Creating a new GCS bucket."
gsutil mb -l ${REGION} gs://${BUCKET}
echo "Here are your current buckets:"
gsutil ls
fi
###Output
_____no_output_____
###Markdown
Create the training and evaluation data tablesSince there is already a publicly available dataset, we can simply create the training and evaluation data tables using this raw input data. First we are going to create a subset of the data, limiting our columns to `weight_pounds`, `is_male`, `mother_age`, `plurality`, and `gestation_weeks`, applying some simple filtering, and adding a column to hash on for repeatable splitting.* Note: The dataset in the create table code below is the one created previously, e.g. "babyweight". Preprocess and filter datasetWe have some preprocessing and filtering we would like to do to get our data in the right format for training.Preprocessing:* Cast `is_male` from `BOOL` to `STRING`* Cast `plurality` from `INTEGER` to `STRING` where `[1, 2, 3, 4, 5]` becomes `["Single(1)", "Twins(2)", "Triplets(3)", "Quadruplets(4)", "Quintuplets(5)"]`* Add a `hashmonth` column hashing on `year` and `month`Filtering:* Only want data for years later than `2000`* Only want baby weights greater than `0`* Only want mothers whose age is greater than `0`* Only want plurality to be greater than `0`* Only want the number of weeks of gestation to be greater than `0`
###Code
%%bigquery
CREATE OR REPLACE TABLE
babyweight.babyweight_data AS
SELECT
weight_pounds,
CAST(is_male AS STRING) AS is_male,
mother_age,
CASE
WHEN plurality = 1 THEN "Single(1)"
WHEN plurality = 2 THEN "Twins(2)"
WHEN plurality = 3 THEN "Triplets(3)"
WHEN plurality = 4 THEN "Quadruplets(4)"
WHEN plurality = 5 THEN "Quintuplets(5)"
END AS plurality,
gestation_weeks,
FARM_FINGERPRINT(
CONCAT(
CAST(year AS STRING),
CAST(month AS STRING)
)
) AS hashmonth
FROM
publicdata.samples.natality
WHERE
year > 2000
AND weight_pounds > 0
AND mother_age > 0
AND plurality > 0
AND gestation_weeks > 0
###Output
_____no_output_____
###Markdown
Augment dataset to simulate missing dataNow we want to augment our dataset with our simulated babyweight data by setting all gender information to `Unknown` and setting plurality of all non-single births to `Multiple(2+)`.
###Code
%%bigquery
CREATE OR REPLACE TABLE
babyweight.babyweight_augmented_data AS
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
hashmonth
FROM
babyweight.babyweight_data
UNION ALL
SELECT
weight_pounds,
"Unknown" AS is_male,
mother_age,
CASE
WHEN plurality = "Single(1)" THEN plurality
ELSE "Multiple(2+)"
END AS plurality,
gestation_weeks,
hashmonth
FROM
babyweight.babyweight_data
###Output
_____no_output_____
###Markdown
Split augmented dataset into train and eval setsUsing `hashmonth`, apply a modulo to get approximately a 75/25 train/eval split. Split augmented dataset into train dataset
###Code
%%bigquery
CREATE OR REPLACE TABLE
babyweight.babyweight_data_train AS
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks
FROM
babyweight.babyweight_augmented_data
WHERE
ABS(MOD(hashmonth, 4)) < 3
###Output
_____no_output_____
###Markdown
Split augmented dataset into eval dataset
###Code
%%bigquery
CREATE OR REPLACE TABLE
babyweight.babyweight_data_eval AS
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks
FROM
babyweight.babyweight_augmented_data
WHERE
ABS(MOD(hashmonth, 4)) = 3
###Output
_____no_output_____
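###Markdown
A sketch that tallies rows per hash bucket makes the intended 75/25 proportions visible before touching the split tables (assumes the `%%bigquery` magic):
###Code
%%bigquery
SELECT
    ABS(MOD(hashmonth, 4)) AS bucket,
    COUNT(*) AS num_rows
FROM
    babyweight.babyweight_augmented_data
GROUP BY
    bucket
ORDER BY
    bucket
###Output
_____no_output_____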
###Markdown
Verify table creationVerify that you created the dataset and training data table.
###Code
%%bigquery
-- LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT * FROM babyweight.babyweight_data_train
LIMIT 0
%%bigquery
-- LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT * FROM babyweight.babyweight_data_eval
LIMIT 0
###Output
_____no_output_____
###Markdown
Export from BigQuery to CSVs in GCSUse the BigQuery Python API to export our train and eval tables to Google Cloud Storage in CSV format for later use in TensorFlow/Keras training. We'll use the dataset we created above and repeat the process for both the training and evaluation data.
###Code
# Construct a BigQuery client object.
client = bigquery.Client()
dataset_name = "babyweight"
# Create dataset reference object
dataset_ref = client.dataset(
dataset_id=dataset_name, project=client.project)
# Export both train and eval tables
for step in ["train", "eval"]:
destination_uri = os.path.join(
"gs://", BUCKET, dataset_name, "data", "{}*.csv".format(step))
table_name = "babyweight_data_{}".format(step)
table_ref = dataset_ref.table(table_name)
extract_job = client.extract_table(
table_ref,
destination_uri,
# Location must match that of the source table.
location="US",
) # API request
extract_job.result() # Waits for job to complete.
print("Exported {}:{}.{} to {}".format(
client.project, dataset_name, table_name, destination_uri))
###Output
Exported qwiklabs-gcp-4b437f7e5bfff9dd:babyweight.babyweight_data_train to gs://qwiklabs-gcp-4b437f7e5bfff9dd/babyweight/data/train*.csv
Exported qwiklabs-gcp-4b437f7e5bfff9dd:babyweight.babyweight_data_eval to gs://qwiklabs-gcp-4b437f7e5bfff9dd/babyweight/data/eval*.csv
###Markdown
Verify CSV creationVerify that we correctly created the CSV files in our bucket.
###Code
%%bash
gsutil ls gs://${BUCKET}/babyweight/data/*.csv
%%bash
gsutil cat gs://${BUCKET}/babyweight/data/train000000000000.csv | head -5
%%bash
gsutil cat gs://${BUCKET}/babyweight/data/eval000000000000.csv | head -5
###Output
weight_pounds,is_male,mother_age,plurality,gestation_weeks
2.74916440714,false,44,Single(1),30
3.68833364326,true,42,Single(1),31
9.49971886958,false,15,Single(1),46
8.4437046346,Unknown,15,Single(1),31
###Markdown
LAB 1b: Prepare babyweight dataset.**Learning Objectives**1. Set up the environment1. Preprocess the natality dataset1. Augment the natality dataset1. Create the train and eval tables in BigQuery1. Export data from BigQuery to GCS in CSV format Introduction In this notebook, we will prepare the babyweight dataset for model development and training to predict the weight of a baby before it is born. We will use BigQuery to perform the data augmentation and preprocessing that will feed AutoML Tables, BigQuery ML, and Keras models trained on Cloud AI Platform.In this lab, we will set up the environment, create the project dataset, preprocess and augment the natality dataset, create the train and eval tables in BigQuery, and export data from BigQuery to GCS in CSV format.Each learning objective will correspond to a __TODO__ in this student lab notebook -- try to complete this notebook first and then review the [solution notebook](../solutions/1b_prepare_data_babyweight.ipynb). Set up environment variables and load necessary libraries Check that the Google BigQuery library is installed, and if not, install it.
###Code
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
!pip install --user google-cloud-bigquery==1.25.0
###Output
Collecting google-cloud-bigquery==1.25.0
Downloading google_cloud_bigquery-1.25.0-py2.py3-none-any.whl (169 kB)
|████████████████████████████████| 169 kB 4.8 MB/s eta 0:00:01
Requirement already satisfied: protobuf>=3.6.0 in /opt/conda/lib/python3.7/site-packages (from google-cloud-bigquery==1.25.0) (3.13.0)
Requirement already satisfied: six<2.0.0dev,>=1.13.0 in /opt/conda/lib/python3.7/site-packages (from google-cloud-bigquery==1.25.0) (1.15.0)
Requirement already satisfied: google-api-core<2.0dev,>=1.15.0 in /opt/conda/lib/python3.7/site-packages (from google-cloud-bigquery==1.25.0) (1.22.1)
Collecting google-resumable-media<0.6dev,>=0.5.0
Downloading google_resumable_media-0.5.1-py2.py3-none-any.whl (38 kB)
Requirement already satisfied: google-auth<2.0dev,>=1.9.0 in /opt/conda/lib/python3.7/site-packages (from google-cloud-bigquery==1.25.0) (1.20.1)
Requirement already satisfied: google-cloud-core<2.0dev,>=1.1.0 in /opt/conda/lib/python3.7/site-packages (from google-cloud-bigquery==1.25.0) (1.3.0)
Requirement already satisfied: setuptools in /opt/conda/lib/python3.7/site-packages (from protobuf>=3.6.0->google-cloud-bigquery==1.25.0) (49.6.0.post20200814)
Requirement already satisfied: pytz in /opt/conda/lib/python3.7/site-packages (from google-api-core<2.0dev,>=1.15.0->google-cloud-bigquery==1.25.0) (2020.1)
Requirement already satisfied: googleapis-common-protos<2.0dev,>=1.6.0 in /opt/conda/lib/python3.7/site-packages (from google-api-core<2.0dev,>=1.15.0->google-cloud-bigquery==1.25.0) (1.51.0)
Requirement already satisfied: requests<3.0.0dev,>=2.18.0 in /opt/conda/lib/python3.7/site-packages (from google-api-core<2.0dev,>=1.15.0->google-cloud-bigquery==1.25.0) (2.24.0)
Requirement already satisfied: cachetools<5.0,>=2.0.0 in /opt/conda/lib/python3.7/site-packages (from google-auth<2.0dev,>=1.9.0->google-cloud-bigquery==1.25.0) (4.1.1)
Requirement already satisfied: rsa<5,>=3.1.4; python_version >= 3.5 in /opt/conda/lib/python3.7/site-packages (from google-auth<2.0dev,>=1.9.0->google-cloud-bigquery==1.25.0) (4.6)
Requirement already satisfied: pyasn1-modules>=0.2.1 in /opt/conda/lib/python3.7/site-packages (from google-auth<2.0dev,>=1.9.0->google-cloud-bigquery==1.25.0) (0.2.8)
Requirement already satisfied: chardet<4,>=3.0.2 in /opt/conda/lib/python3.7/site-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<2.0dev,>=1.15.0->google-cloud-bigquery==1.25.0) (3.0.4)
Requirement already satisfied: idna<3,>=2.5 in /opt/conda/lib/python3.7/site-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<2.0dev,>=1.15.0->google-cloud-bigquery==1.25.0) (2.10)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /opt/conda/lib/python3.7/site-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<2.0dev,>=1.15.0->google-cloud-bigquery==1.25.0) (1.25.10)
Requirement already satisfied: certifi>=2017.4.17 in /opt/conda/lib/python3.7/site-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<2.0dev,>=1.15.0->google-cloud-bigquery==1.25.0) (2020.6.20)
Requirement already satisfied: pyasn1>=0.1.3 in /opt/conda/lib/python3.7/site-packages (from rsa<5,>=3.1.4; python_version >= 3.5->google-auth<2.0dev,>=1.9.0->google-cloud-bigquery==1.25.0) (0.4.8)
Installing collected packages: google-resumable-media, google-cloud-bigquery
ERROR: After October 2020 you may experience errors when installing or updating packages. This is because pip will change the way that it resolves dependency conflicts.
We recommend you use --use-feature=2020-resolver to test your packages with the new resolver before it becomes the default.
google-cloud-storage 1.30.0 requires google-resumable-media<2.0dev,>=0.6.0, but you'll have google-resumable-media 0.5.1 which is incompatible.
Successfully installed google-cloud-bigquery-1.25.0 google-resumable-media-0.5.1
###Markdown
**Note**: Restart your kernel to use updated packages. Kindly ignore the deprecation warnings and incompatibility errors related to google-cloud-storage. Import necessary libraries.
###Code
import os
from google.cloud import bigquery
###Output
_____no_output_____
###Markdown
Set environment variables so that we can use them throughout the entire lab. We will be using our project name for our bucket, so you only need to change your project and region.
###Code
%%bash
export PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project Name is: "$PROJECT
# TODO: Change environment variables
PROJECT = "cloud-training-demos" # REPLACE WITH YOUR PROJECT NAME
BUCKET = "BUCKET" # REPLACE WITH YOUR PROJECT NAME, DEFAULT BUCKET WILL BE PROJECT ID
REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# Do not change these
os.environ["BUCKET"] = PROJECT if BUCKET == "BUCKET" else BUCKET # DEFAULT BUCKET WILL BE PROJECT ID
os.environ["REGION"] = REGION
if PROJECT == "cloud-training-demos":
print("Don't forget to update your PROJECT name! Currently:", PROJECT)
###Output
_____no_output_____
###Markdown
The source datasetOur dataset is hosted in [BigQuery](https://cloud.google.com/bigquery/). The CDC's Natality data has details on US births from 1969 to 2008 and is a publically available dataset, meaning anyone with a GCP account has access. Click [here](https://console.cloud.google.com/bigquery?project=bigquery-public-data&p=publicdata&d=samples&t=natality&page=table) to access the dataset.The natality dataset is relatively large at almost 138 million rows and 31 columns, but simple to understand. `weight_pounds` is the target, the continuous value we’ll train a model to predict. Create a BigQuery Dataset and Google Cloud Storage Bucket A BigQuery dataset is a container for tables, views, and models built with BigQuery ML. Let's create one called __babyweight__ if we have not already done so in an earlier lab. We'll do the same for a GCS bucket for our project too.
###Code
%%bash
# Create a BigQuery dataset for babyweight if it doesn't exist
datasetexists=$(bq ls -d | grep -w babyweight)
if [ -n "$datasetexists" ]; then
echo -e "BigQuery dataset already exists, let's not recreate it."
else
echo "Creating BigQuery dataset titled: babyweight"
bq --location=US mk --dataset \
--description "Babyweight" \
$PROJECT:babyweight
echo "Here are your current datasets:"
bq ls
fi
## Create GCS bucket if it doesn't exist already...
exists=$(gsutil ls -d | grep -w gs://${BUCKET}/)
if [ -n "$exists" ]; then
echo -e "Bucket exists, let's not recreate it."
else
echo "Creating a new GCS bucket."
gsutil mb -l ${REGION} gs://${BUCKET}
echo "Here are your current buckets:"
gsutil ls
fi
###Output
_____no_output_____
###Markdown
Create the training and evaluation data tablesSince there is already a publicly available dataset, we can simply create the training and evaluation data tables using this raw input data. First we are going to create a subset of the data limiting our columns to `weight_pounds`, `is_male`, `mother_age`, `plurality`, and `gestation_weeks` as well as some simple filtering and a column to hash on for repeatable splitting.* Note: The dataset in the create table code below is the one created previously, e.g. "babyweight". Preprocess and filter datasetWe have some preprocessing and filtering we would like to do to get our data in the right format for training.Preprocessing:* Cast `is_male` from `BOOL` to `STRING`* Cast `plurality` from `INTEGER` to `STRING` where `[1, 2, 3, 4, 5]` becomes `["Single(1)", "Twins(2)", "Triplets(3)", "Quadruplets(4)", "Quintuplets(5)"]`* Add `hashcolumn` hashing on `year` and `month`Filtering:* Only want data for years later than `2000`* Only want baby weights greater than `0`* Only want mothers whose age is greater than `0`* Only want plurality to be greater than `0`* Only want the number of weeks of gestation to be greater than `0`
###Code
%%bigquery
CREATE OR REPLACE TABLE
babyweight.babyweight_data AS
SELECT
weight_pounds,
CAST(is_male AS STRING) AS is_male,
mother_age,
CASE
WHEN plurality = 1 THEN "Single(1)"
WHEN plurality = 2 THEN "Twins(2)"
WHEN plurality = 3 THEN "Triplets(3)"
WHEN plurality = 4 THEN "Quadruplets(4)"
WHEN plurality = 5 THEN "Quintuplets(5)"
END AS plurality,
gestation_weeks,
FARM_FINGERPRINT(
CONCAT(
CAST(year AS STRING),
CAST(month AS STRING)
)
) AS hashmonth
FROM
publicdata.samples.natality
WHERE
year > 2000
AND weight_pounds > 0
AND mother_age > 0
AND plurality > 0
AND gestation_weeks > 0
###Output
_____no_output_____
###Markdown
Augment dataset to simulate missing dataNow we want to augment our dataset with our simulated babyweight data by setting all gender information to `Unknown` and setting plurality of all non-single births to `Multiple(2+)`.
###Code
%%bigquery
CREATE OR REPLACE TABLE
babyweight.babyweight_augmented_data AS
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
hashmonth
FROM
babyweight.babyweight_data
UNION ALL
SELECT
weight_pounds,
"Unknown" AS is_male,
mother_age,
CASE
WHEN plurality = "Single(1)" THEN plurality
ELSE "Multiple(2+)"
END AS plurality,
gestation_weeks,
hashmonth
FROM
babyweight.babyweight_data
###Output
_____no_output_____
###Markdown
Split augmented dataset into train and eval setsUsing `hashmonth`, apply a modulo to get approximately a 75/25 train/eval split. Split augmented dataset into train dataset
###Code
%%bigquery
CREATE OR REPLACE TABLE
babyweight.babyweight_data_train AS
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks
FROM
babyweight.babyweight_augmented_data
WHERE
ABS(MOD(hashmonth, 4)) < 3
###Output
_____no_output_____
###Markdown
Split augmented dataset into eval dataset
###Code
%%bigquery
CREATE OR REPLACE TABLE
babyweight.babyweight_data_eval AS
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks
FROM
babyweight.babyweight_augmented_data
WHERE
ABS(MOD(hashmonth, 4)) = 3
###Output
_____no_output_____
###Markdown
Verify table creationVerify that you created the dataset and training data table.
###Code
%%bigquery
-- LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT * FROM babyweight.babyweight_data_train
LIMIT 0
%%bigquery
-- LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT * FROM babyweight.babyweight_data_eval
LIMIT 0
###Output
_____no_output_____
###Markdown
Export from BigQuery to CSVs in GCSUse BigQuery Python API to export our train and eval tables to Google Cloud Storage in the CSV format to be used later for TensorFlow/Keras training. We'll want to use the dataset we've been using above as well as repeat the process for both training and evaluation data.
###Code
# Construct a BigQuery client object.
client = bigquery.Client()
dataset_name = "babyweight"
# Create dataset reference object
dataset_ref = client.dataset(
dataset_id=dataset_name, project=client.project)
# Export both train and eval tables
for step in ["train", "eval"]:
destination_uri = os.path.join(
"gs://", BUCKET, dataset_name, "data", "{}*.csv".format(step))
table_name = "babyweight_data_{}".format(step)
table_ref = dataset_ref.table(table_name)
extract_job = client.extract_table(
table_ref,
destination_uri,
# Location must match that of the source table.
location="US",
) # API request
extract_job.result() # Waits for job to complete.
print("Exported {}:{}.{} to {}".format(
client.project, dataset_name, table_name, destination_uri))
###Output
Exported qwiklabs-gcp-4b437f7e5bfff9dd:babyweight.babyweight_data_train to gs://qwiklabs-gcp-4b437f7e5bfff9dd/babyweight/data/train*.csv
Exported qwiklabs-gcp-4b437f7e5bfff9dd:babyweight.babyweight_data_eval to gs://qwiklabs-gcp-4b437f7e5bfff9dd/babyweight/data/eval*.csv
###Markdown
Verify CSV creationVerify that we correctly created the CSV files in our bucket.
###Code
%%bash
gsutil ls gs://${BUCKET}/babyweight/data/*.csv
%%bash
gsutil cat gs://${BUCKET}/babyweight/data/train000000000000.csv | head -5
%%bash
gsutil cat gs://${BUCKET}/babyweight/data/eval000000000000.csv | head -5
###Output
weight_pounds,is_male,mother_age,plurality,gestation_weeks
2.74916440714,false,44,Single(1),30
3.68833364326,true,42,Single(1),31
9.49971886958,false,15,Single(1),46
8.4437046346,Unknown,15,Single(1),31
###Markdown
LAB 1b: Prepare babyweight dataset.**Learning Objectives**1. Setup up the environment1. Preprocess natality dataset1. Augment natality dataset1. Create the train and eval tables in BigQuery1. Export data from BigQuery to GCS in CSV format Introduction In this notebook, we will prepare the babyweight dataset for model development and training to predict the weight of a baby before it is born. We will use BigQuery to perform data augmentation and preprocessing which will be used for AutoML Tables, BigQuery ML, and Keras models trained on Cloud AI Platform.In this lab, we will set up the environment, create the project dataset, preprocess and augment natality dataset, create the train and eval tables in BigQuery, and export data from BigQuery to GCS in CSV format.Each learning objective will correspond to a __TODO__ in this student lab notebook -- try to complete this notebook first and then review the [solution notebook](../solutions/1b_prepare_data_babyweight.ipynb). Set up environment variables and load necessary libraries Check that the Google BigQuery library is installed and if not, install it.
###Code
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
!pip install --user google-cloud-bigquery==1.25.0
###Output
Collecting google-cloud-bigquery==1.25.0
Downloading google_cloud_bigquery-1.25.0-py2.py3-none-any.whl (169 kB)
|████████████████████████████████| 169 kB 4.8 MB/s eta 0:00:01
Requirement already satisfied: protobuf>=3.6.0 in /opt/conda/lib/python3.7/site-packages (from google-cloud-bigquery==1.25.0) (3.13.0)
Requirement already satisfied: six<2.0.0dev,>=1.13.0 in /opt/conda/lib/python3.7/site-packages (from google-cloud-bigquery==1.25.0) (1.15.0)
Requirement already satisfied: google-api-core<2.0dev,>=1.15.0 in /opt/conda/lib/python3.7/site-packages (from google-cloud-bigquery==1.25.0) (1.22.1)
Collecting google-resumable-media<0.6dev,>=0.5.0
Downloading google_resumable_media-0.5.1-py2.py3-none-any.whl (38 kB)
Requirement already satisfied: google-auth<2.0dev,>=1.9.0 in /opt/conda/lib/python3.7/site-packages (from google-cloud-bigquery==1.25.0) (1.20.1)
Requirement already satisfied: google-cloud-core<2.0dev,>=1.1.0 in /opt/conda/lib/python3.7/site-packages (from google-cloud-bigquery==1.25.0) (1.3.0)
Requirement already satisfied: setuptools in /opt/conda/lib/python3.7/site-packages (from protobuf>=3.6.0->google-cloud-bigquery==1.25.0) (49.6.0.post20200814)
Requirement already satisfied: pytz in /opt/conda/lib/python3.7/site-packages (from google-api-core<2.0dev,>=1.15.0->google-cloud-bigquery==1.25.0) (2020.1)
Requirement already satisfied: googleapis-common-protos<2.0dev,>=1.6.0 in /opt/conda/lib/python3.7/site-packages (from google-api-core<2.0dev,>=1.15.0->google-cloud-bigquery==1.25.0) (1.51.0)
Requirement already satisfied: requests<3.0.0dev,>=2.18.0 in /opt/conda/lib/python3.7/site-packages (from google-api-core<2.0dev,>=1.15.0->google-cloud-bigquery==1.25.0) (2.24.0)
Requirement already satisfied: cachetools<5.0,>=2.0.0 in /opt/conda/lib/python3.7/site-packages (from google-auth<2.0dev,>=1.9.0->google-cloud-bigquery==1.25.0) (4.1.1)
Requirement already satisfied: rsa<5,>=3.1.4; python_version >= 3.5 in /opt/conda/lib/python3.7/site-packages (from google-auth<2.0dev,>=1.9.0->google-cloud-bigquery==1.25.0) (4.6)
Requirement already satisfied: pyasn1-modules>=0.2.1 in /opt/conda/lib/python3.7/site-packages (from google-auth<2.0dev,>=1.9.0->google-cloud-bigquery==1.25.0) (0.2.8)
Requirement already satisfied: chardet<4,>=3.0.2 in /opt/conda/lib/python3.7/site-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<2.0dev,>=1.15.0->google-cloud-bigquery==1.25.0) (3.0.4)
Requirement already satisfied: idna<3,>=2.5 in /opt/conda/lib/python3.7/site-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<2.0dev,>=1.15.0->google-cloud-bigquery==1.25.0) (2.10)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /opt/conda/lib/python3.7/site-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<2.0dev,>=1.15.0->google-cloud-bigquery==1.25.0) (1.25.10)
Requirement already satisfied: certifi>=2017.4.17 in /opt/conda/lib/python3.7/site-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<2.0dev,>=1.15.0->google-cloud-bigquery==1.25.0) (2020.6.20)
Requirement already satisfied: pyasn1>=0.1.3 in /opt/conda/lib/python3.7/site-packages (from rsa<5,>=3.1.4; python_version >= 3.5->google-auth<2.0dev,>=1.9.0->google-cloud-bigquery==1.25.0) (0.4.8)
Installing collected packages: google-resumable-media, google-cloud-bigquery
ERROR: After October 2020 you may experience errors when installing or updating packages. This is because pip will change the way that it resolves dependency conflicts.
We recommend you use --use-feature=2020-resolver to test your packages with the new resolver before it becomes the default.
google-cloud-storage 1.30.0 requires google-resumable-media<2.0dev,>=0.6.0, but you'll have google-resumable-media 0.5.1 which is incompatible.
Successfully installed google-cloud-bigquery-1.25.0 google-resumable-media-0.5.1
###Markdown
**Note**: Restart your kernel to use updated packages. Kindly ignore the deprecation warnings and incompatibility errors related to google-cloud-storage. Import necessary libraries.
###Code
import os
from google.cloud import bigquery
###Output
_____no_output_____
###Markdown
Set environment variables so that we can use them throughout the entire lab. We will be using our project name for our bucket, so you only need to change your project and region.
###Code
%%bash
export PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project Name is: "$PROJECT
# TODO: Change environment variables
PROJECT = "cloud-training-demos" # REPLACE WITH YOUR PROJECT NAME
BUCKET = "BUCKET" # REPLACE WITH YOUR PROJECT NAME, DEFAULT BUCKET WILL BE PROJECT ID
REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# Do not change these
os.environ["BUCKET"] = PROJECT if BUCKET == "BUCKET" else BUCKET # DEFAULT BUCKET WILL BE PROJECT ID
os.environ["REGION"] = REGION
if PROJECT == "cloud-training-demos":
print("Don't forget to update your PROJECT name! Currently:", PROJECT)
###Output
_____no_output_____
###Markdown
The source datasetOur dataset is hosted in [BigQuery](https://cloud.google.com/bigquery/). The CDC's Natality data has details on US births from 1969 to 2008 and is a publically available dataset, meaning anyone with a GCP account has access. Click [here](https://console.cloud.google.com/bigquery?project=bigquery-public-data&p=publicdata&d=samples&t=natality&page=table) to access the dataset.The natality dataset is relatively large at almost 138 million rows and 31 columns, but simple to understand. `weight_pounds` is the target, the continuous value we’ll train a model to predict. Create a BigQuery Dataset and Google Cloud Storage Bucket A BigQuery dataset is a container for tables, views, and models built with BigQuery ML. Let's create one called __babyweight__ if we have not already done so in an earlier lab. We'll do the same for a GCS bucket for our project too.
###Code
%%bash
# Create a BigQuery dataset for babyweight if it doesn't exist
datasetexists=$(bq ls -d | grep -w babyweight)
if [ -n "$datasetexists" ]; then
echo -e "BigQuery dataset already exists, let's not recreate it."
else
echo "Creating BigQuery dataset titled: babyweight"
bq --location=US mk --dataset \
--description "Babyweight" \
$PROJECT:babyweight
echo "Here are your current datasets:"
bq ls
fi
## Create GCS bucket if it doesn't exist already...
exists=$(gsutil ls -d | grep -w gs://${BUCKET}/)
if [ -n "$exists" ]; then
echo -e "Bucket exists, let's not recreate it."
else
echo "Creating a new GCS bucket."
gsutil mb -l ${REGION} gs://${BUCKET}
echo "Here are your current buckets:"
gsutil ls
fi
###Output
_____no_output_____
###Markdown
Create the training and evaluation data tablesSince there is already a publicly available dataset, we can simply create the training and evaluation data tables using this raw input data. First we are going to create a subset of the data limiting our columns to `weight_pounds`, `is_male`, `mother_age`, `plurality`, and `gestation_weeks` as well as some simple filtering and a column to hash on for repeatable splitting.* Note: The dataset in the create table code below is the one created previously, e.g. "babyweight". Preprocess and filter datasetWe have some preprocessing and filtering we would like to do to get our data in the right format for training.Preprocessing:* Cast `is_male` from `BOOL` to `STRING`* Cast `plurality` from `INTEGER` to `STRING` where `[1, 2, 3, 4, 5]` becomes `["Single(1)", "Twins(2)", "Triplets(3)", "Quadruplets(4)", "Quintuplets(5)"]`* Add `hashcolumn` hashing on `year` and `month`Filtering:* Only want data for years later than `2000`* Only want baby weights greater than `0`* Only want mothers whose age is greater than `0`* Only want plurality to be greater than `0`* Only want the number of weeks of gestation to be greater than `0`
###Code
%%bigquery
CREATE OR REPLACE TABLE
babyweight.babyweight_data AS
SELECT
weight_pounds,
CAST(is_male AS STRING) AS is_male,
mother_age,
CASE
WHEN plurality = 1 THEN "Single(1)"
WHEN plurality = 2 THEN "Twins(2)"
WHEN plurality = 3 THEN "Triplets(3)"
WHEN plurality = 4 THEN "Quadruplets(4)"
WHEN plurality = 5 THEN "Quintuplets(5)"
END AS plurality,
gestation_weeks,
FARM_FINGERPRINT(
CONCAT(
CAST(year AS STRING),
CAST(month AS STRING)
)
) AS hashmonth
FROM
publicdata.samples.natality
WHERE
year > 2000
AND weight_pounds > 0
AND mother_age > 0
AND plurality > 0
AND gestation_weeks > 0
###Output
_____no_output_____
###Markdown
Augment dataset to simulate missing dataNow we want to augment our dataset with our simulated babyweight data by setting all gender information to `Unknown` and setting plurality of all non-single births to `Multiple(2+)`.
###Code
%%bigquery
CREATE OR REPLACE TABLE
babyweight.babyweight_augmented_data AS
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
hashmonth
FROM
babyweight.babyweight_data
UNION ALL
SELECT
weight_pounds,
"Unknown" AS is_male,
mother_age,
CASE
WHEN plurality = "Single(1)" THEN plurality
ELSE "Multiple(2+)"
END AS plurality,
gestation_weeks,
hashmonth
FROM
babyweight.babyweight_data
###Output
_____no_output_____
###Markdown
Split augmented dataset into train and eval setsUsing `hashmonth`, apply a modulo to get approximately a 75/25 train/eval split. Split augmented dataset into train dataset
###Code
%%bigquery
CREATE OR REPLACE TABLE
babyweight.babyweight_data_train AS
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks
FROM
babyweight.babyweight_augmented_data
WHERE
ABS(MOD(hashmonth, 4)) < 3
###Output
_____no_output_____
###Markdown
Split augmented dataset into eval dataset
###Code
%%bigquery
CREATE OR REPLACE TABLE
babyweight.babyweight_data_eval AS
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks
FROM
babyweight.babyweight_augmented_data
WHERE
ABS(MOD(hashmonth, 4)) = 3
###Output
_____no_output_____
###Markdown
Verify table creationVerify that you created the dataset and training data table.
###Code
%%bigquery
-- LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT * FROM babyweight.babyweight_data_train
LIMIT 0
%%bigquery
-- LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT * FROM babyweight.babyweight_data_eval
LIMIT 0
###Output
_____no_output_____
###Markdown
Export from BigQuery to CSVs in GCSUse BigQuery Python API to export our train and eval tables to Google Cloud Storage in the CSV format to be used later for TensorFlow/Keras training. We'll want to use the dataset we've been using above as well as repeat the process for both training and evaluation data.Update **BUCKET** with your bucket ID
###Code
# Construct a BigQuery client object.
client = bigquery.Client()
dataset_name = "babyweight"
BUCKET = "your bucket id"
# Create dataset reference object
dataset_ref = client.dataset(
dataset_id=dataset_name, project=client.project)
# Export both train and eval tables
for step in ["train", "eval"]:
destination_uri = os.path.join(
"gs://", BUCKET, dataset_name, "data", "{}*.csv".format(step))
table_name = "babyweight_data_{}".format(step)
table_ref = dataset_ref.table(table_name)
extract_job = client.extract_table(
table_ref,
destination_uri,
# Location must match that of the source table.
location="US",
) # API request
extract_job.result() # Waits for job to complete.
print("Exported {}:{}.{} to {}".format(
client.project, dataset_name, table_name, destination_uri))
###Output
Exported qwiklabs-gcp-4b437f7e5bfff9dd:babyweight.babyweight_data_train to gs://qwiklabs-gcp-4b437f7e5bfff9dd/babyweight/data/train*.csv
Exported qwiklabs-gcp-4b437f7e5bfff9dd:babyweight.babyweight_data_eval to gs://qwiklabs-gcp-4b437f7e5bfff9dd/babyweight/data/eval*.csv
###Markdown
Verify CSV creationVerify that we correctly created the CSV files in our bucket.
###Code
%%bash
gsutil ls gs://${BUCKET}/babyweight/data/*.csv
%%bash
gsutil cat gs://${BUCKET}/babyweight/data/train000000000000.csv | head -5
%%bash
gsutil cat gs://${BUCKET}/babyweight/data/eval000000000000.csv | head -5
###Output
weight_pounds,is_male,mother_age,plurality,gestation_weeks
2.74916440714,false,44,Single(1),30
3.68833364326,true,42,Single(1),31
9.49971886958,false,15,Single(1),46
8.4437046346,Unknown,15,Single(1),31
###Markdown
LAB 1b: Prepare babyweight dataset.**Learning Objectives**1. Setup up the environment1. Preprocess natality dataset1. Augment natality dataset1. Create the train and eval tables in BigQuery1. Export data from BigQuery to GCS in CSV format Introduction In this notebook, we will prepare the babyweight dataset for model development and training to predict the weight of a baby before it is born. We will use BigQuery to perform data augmentation and preprocessing which will be used for AutoML Tables, BigQuery ML, and Keras models trained on Cloud AI Platform.In this lab, we will set up the environment, create the project dataset, preprocess and augment natality dataset, create the train and eval tables in BigQuery, and export data from BigQuery to GCS in CSV format.Each learning objective will correspond to a __TODO__ in this student lab notebook -- try to complete this notebook first and then review the [solution notebook](../solutions/1b_prepare_data_babyweight.ipynb). Set up environment variables and load necessary libraries Check that the Google BigQuery library is installed and if not, install it.
###Code
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
!pip install --user google-cloud-bigquery==1.25.0
###Output
Collecting google-cloud-bigquery==1.25.0
Downloading google_cloud_bigquery-1.25.0-py2.py3-none-any.whl (169 kB)
|████████████████████████████████| 169 kB 4.8 MB/s eta 0:00:01
Requirement already satisfied: protobuf>=3.6.0 in /opt/conda/lib/python3.7/site-packages (from google-cloud-bigquery==1.25.0) (3.13.0)
Requirement already satisfied: six<2.0.0dev,>=1.13.0 in /opt/conda/lib/python3.7/site-packages (from google-cloud-bigquery==1.25.0) (1.15.0)
Requirement already satisfied: google-api-core<2.0dev,>=1.15.0 in /opt/conda/lib/python3.7/site-packages (from google-cloud-bigquery==1.25.0) (1.22.1)
Collecting google-resumable-media<0.6dev,>=0.5.0
Downloading google_resumable_media-0.5.1-py2.py3-none-any.whl (38 kB)
Requirement already satisfied: google-auth<2.0dev,>=1.9.0 in /opt/conda/lib/python3.7/site-packages (from google-cloud-bigquery==1.25.0) (1.20.1)
Requirement already satisfied: google-cloud-core<2.0dev,>=1.1.0 in /opt/conda/lib/python3.7/site-packages (from google-cloud-bigquery==1.25.0) (1.3.0)
Requirement already satisfied: setuptools in /opt/conda/lib/python3.7/site-packages (from protobuf>=3.6.0->google-cloud-bigquery==1.25.0) (49.6.0.post20200814)
Requirement already satisfied: pytz in /opt/conda/lib/python3.7/site-packages (from google-api-core<2.0dev,>=1.15.0->google-cloud-bigquery==1.25.0) (2020.1)
Requirement already satisfied: googleapis-common-protos<2.0dev,>=1.6.0 in /opt/conda/lib/python3.7/site-packages (from google-api-core<2.0dev,>=1.15.0->google-cloud-bigquery==1.25.0) (1.51.0)
Requirement already satisfied: requests<3.0.0dev,>=2.18.0 in /opt/conda/lib/python3.7/site-packages (from google-api-core<2.0dev,>=1.15.0->google-cloud-bigquery==1.25.0) (2.24.0)
Requirement already satisfied: cachetools<5.0,>=2.0.0 in /opt/conda/lib/python3.7/site-packages (from google-auth<2.0dev,>=1.9.0->google-cloud-bigquery==1.25.0) (4.1.1)
Requirement already satisfied: rsa<5,>=3.1.4; python_version >= 3.5 in /opt/conda/lib/python3.7/site-packages (from google-auth<2.0dev,>=1.9.0->google-cloud-bigquery==1.25.0) (4.6)
Requirement already satisfied: pyasn1-modules>=0.2.1 in /opt/conda/lib/python3.7/site-packages (from google-auth<2.0dev,>=1.9.0->google-cloud-bigquery==1.25.0) (0.2.8)
Requirement already satisfied: chardet<4,>=3.0.2 in /opt/conda/lib/python3.7/site-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<2.0dev,>=1.15.0->google-cloud-bigquery==1.25.0) (3.0.4)
Requirement already satisfied: idna<3,>=2.5 in /opt/conda/lib/python3.7/site-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<2.0dev,>=1.15.0->google-cloud-bigquery==1.25.0) (2.10)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /opt/conda/lib/python3.7/site-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<2.0dev,>=1.15.0->google-cloud-bigquery==1.25.0) (1.25.10)
Requirement already satisfied: certifi>=2017.4.17 in /opt/conda/lib/python3.7/site-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<2.0dev,>=1.15.0->google-cloud-bigquery==1.25.0) (2020.6.20)
Requirement already satisfied: pyasn1>=0.1.3 in /opt/conda/lib/python3.7/site-packages (from rsa<5,>=3.1.4; python_version >= 3.5->google-auth<2.0dev,>=1.9.0->google-cloud-bigquery==1.25.0) (0.4.8)
Installing collected packages: google-resumable-media, google-cloud-bigquery
ERROR: After October 2020 you may experience errors when installing or updating packages. This is because pip will change the way that it resolves dependency conflicts.
We recommend you use --use-feature=2020-resolver to test your packages with the new resolver before it becomes the default.
google-cloud-storage 1.30.0 requires google-resumable-media<2.0dev,>=0.6.0, but you'll have google-resumable-media 0.5.1 which is incompatible.
Successfully installed google-cloud-bigquery-1.25.0 google-resumable-media-0.5.1
###Markdown
**Note**: Restart your kernel to use updated packages. Kindly ignore the deprecation warnings and incompatibility errors related to google-cloud-storage. Import necessary libraries.
###Code
import os
from google.cloud import bigquery
###Output
_____no_output_____
###Markdown
Set environment variables so that we can use them throughout the entire lab. We will be using our project name for our bucket, so you only need to change your project and region.
###Code
%%bash
export PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project Name is: "$PROJECT
# TODO: Change environment variables
PROJECT = "cloud-training-demos" # REPLACE WITH YOUR PROJECT NAME
BUCKET = "BUCKET" # REPLACE WITH YOUR PROJECT NAME, DEFAULT BUCKET WILL BE PROJECT ID
REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# Do not change these
os.environ["BUCKET"] = PROJECT if BUCKET == "BUCKET" else BUCKET # DEFAULT BUCKET WILL BE PROJECT ID
os.environ["REGION"] = REGION
if PROJECT == "cloud-training-demos":
print("Don't forget to update your PROJECT name! Currently:", PROJECT)
###Output
_____no_output_____
###Markdown
The source datasetOur dataset is hosted in [BigQuery](https://cloud.google.com/bigquery/). The CDC's Natality data has details on US births from 1969 to 2008 and is a publically available dataset, meaning anyone with a GCP account has access. Click [here](https://console.cloud.google.com/bigquery?project=bigquery-public-data&p=publicdata&d=samples&t=natality&page=table) to access the dataset.The natality dataset is relatively large at almost 138 million rows and 31 columns, but simple to understand. `weight_pounds` is the target, the continuous value we’ll train a model to predict. Create a BigQuery Dataset and Google Cloud Storage Bucket A BigQuery dataset is a container for tables, views, and models built with BigQuery ML. Let's create one called __babyweight__ if we have not already done so in an earlier lab. We'll do the same for a GCS bucket for our project too.
###Code
%%bash
# Create a BigQuery dataset for babyweight if it doesn't exist
datasetexists=$(bq ls -d | grep -w babyweight)
if [ -n "$datasetexists" ]; then
echo -e "BigQuery dataset already exists, let's not recreate it."
else
echo "Creating BigQuery dataset titled: babyweight"
bq --location=US mk --dataset \
--description "Babyweight" \
$PROJECT:babyweight
echo "Here are your current datasets:"
bq ls
fi
## Create GCS bucket if it doesn't exist already...
exists=$(gsutil ls -d | grep -w gs://${BUCKET}/)
if [ -n "$exists" ]; then
echo -e "Bucket exists, let's not recreate it."
else
echo "Creating a new GCS bucket."
gsutil mb -l ${REGION} gs://${BUCKET}
echo "Here are your current buckets:"
gsutil ls
fi
###Output
_____no_output_____
###Markdown
Create the training and evaluation data tablesSince there is already a publicly available dataset, we can simply create the training and evaluation data tables using this raw input data. First we are going to create a subset of the data limiting our columns to `weight_pounds`, `is_male`, `mother_age`, `plurality`, and `gestation_weeks` as well as some simple filtering and a column to hash on for repeatable splitting.* Note: The dataset in the create table code below is the one created previously, e.g. "babyweight". Preprocess and filter datasetWe have some preprocessing and filtering we would like to do to get our data in the right format for training.Preprocessing:* Cast `is_male` from `BOOL` to `STRING`* Cast `plurality` from `INTEGER` to `STRING` where `[1, 2, 3, 4, 5]` becomes `["Single(1)", "Twins(2)", "Triplets(3)", "Quadruplets(4)", "Quintuplets(5)"]`* Add `hashcolumn` hashing on `year` and `month`Filtering:* Only want data for years later than `2000`* Only want baby weights greater than `0`* Only want mothers whose age is greater than `0`* Only want plurality to be greater than `0`* Only want the number of weeks of gestation to be greater than `0`
###Code
%%bigquery
CREATE OR REPLACE TABLE
babyweight.babyweight_data AS
SELECT
weight_pounds,
CAST(is_male AS STRING) AS is_male,
mother_age,
CASE
WHEN plurality = 1 THEN "Single(1)"
WHEN plurality = 2 THEN "Twins(2)"
WHEN plurality = 3 THEN "Triplets(3)"
WHEN plurality = 4 THEN "Quadruplets(4)"
WHEN plurality = 5 THEN "Quintuplets(5)"
END AS plurality,
gestation_weeks,
FARM_FINGERPRINT(
CONCAT(
CAST(year AS STRING),
CAST(month AS STRING)
)
) AS hashmonth
FROM
publicdata.samples.natality
WHERE
year > 2000
AND weight_pounds > 0
AND mother_age > 0
AND plurality > 0
AND gestation_weeks > 0
###Output
_____no_output_____
###Markdown
Augment dataset to simulate missing dataNow we want to augment our dataset with our simulated babyweight data by setting all gender information to `Unknown` and setting plurality of all non-single births to `Multiple(2+)`.
###Code
%%bigquery
CREATE OR REPLACE TABLE
babyweight.babyweight_augmented_data AS
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
hashmonth
FROM
babyweight.babyweight_data
UNION ALL
SELECT
weight_pounds,
"Unknown" AS is_male,
mother_age,
CASE
WHEN plurality = "Single(1)" THEN plurality
ELSE "Multiple(2+)"
END AS plurality,
gestation_weeks,
hashmonth
FROM
babyweight.babyweight_data
###Output
_____no_output_____
###Markdown
Split augmented dataset into train and eval setsUsing `hashmonth`, apply a modulo to get approximately a 75/25 train/eval split. Split augmented dataset into train dataset
###Code
%%bigquery
CREATE OR REPLACE TABLE
babyweight.babyweight_data_train AS
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks
FROM
babyweight.babyweight_augmented_data
WHERE
ABS(MOD(hashmonth, 4)) < 3
###Output
_____no_output_____
###Markdown
Split augmented dataset into eval dataset
###Code
%%bigquery
CREATE OR REPLACE TABLE
babyweight.babyweight_data_eval AS
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks
FROM
babyweight.babyweight_augmented_data
WHERE
ABS(MOD(hashmonth, 4)) = 3
###Output
_____no_output_____
###Markdown
Verify table creationVerify that you created the dataset and training data table.
###Code
%%bigquery
-- LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT * FROM babyweight.babyweight_data_train
LIMIT 0
%%bigquery
-- LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT * FROM babyweight.babyweight_data_eval
LIMIT 0
###Output
_____no_output_____
###Markdown
Export from BigQuery to CSVs in GCSUse BigQuery Python API to export our train and eval tables to Google Cloud Storage in the CSV format to be used later for TensorFlow/Keras training. We'll want to use the dataset we've been using above as well as repeat the process for both training and evaluation data.
###Code
# Construct a BigQuery client object.
client = bigquery.Client()
dataset_name = "babyweight"
# Create dataset reference object
dataset_ref = client.dataset(
dataset_id=dataset_name, project=client.project)
# Export both train and eval tables
for step in ["train", "eval"]:
destination_uri = os.path.join(
"gs://", BUCKET, dataset_name, "data", "{}*.csv".format(step))
table_name = "babyweight_data_{}".format(step)
table_ref = dataset_ref.table(table_name)
extract_job = client.extract_table(
table_ref,
destination_uri,
# Location must match that of the source table.
location="US",
) # API request
extract_job.result() # Waits for job to complete.
print("Exported {}:{}.{} to {}".format(
client.project, dataset_name, table_name, destination_uri))
###Output
Exported qwiklabs-gcp-4b437f7e5bfff9dd:babyweight.babyweight_data_train to gs://qwiklabs-gcp-4b437f7e5bfff9dd/babyweight/data/train*.csv
Exported qwiklabs-gcp-4b437f7e5bfff9dd:babyweight.babyweight_data_eval to gs://qwiklabs-gcp-4b437f7e5bfff9dd/babyweight/data/eval*.csv
###Markdown
Verify CSV creationVerify that we correctly created the CSV files in our bucket.
###Code
%%bash
gsutil ls gs://${BUCKET}/babyweight/data/*.csv
%%bash
gsutil cat gs://${BUCKET}/babyweight/data/train000000000000.csv | head -5
%%bash
gsutil cat gs://${BUCKET}/babyweight/data/eval000000000000.csv | head -5
###Output
weight_pounds,is_male,mother_age,plurality,gestation_weeks
2.74916440714,false,44,Single(1),30
3.68833364326,true,42,Single(1),31
9.49971886958,false,15,Single(1),46
8.4437046346,Unknown,15,Single(1),31
###Markdown
LAB 1b: Prepare babyweight dataset.**Learning Objectives**1. Setup up the environment1. Preprocess natality dataset1. Augment natality dataset1. Create the train and eval tables in BigQuery1. Export data from BigQuery to GCS in CSV format Introduction In this notebook, we will prepare the babyweight dataset for model development and training to predict the weight of a baby before it is born. We will use BigQuery to perform data augmentation and preprocessing which will be used for AutoML Tables, BigQuery ML, and Keras models trained on Cloud AI Platform.In this lab, we will set up the environment, create the project dataset, preprocess and augment natality dataset, create the train and eval tables in BigQuery, and export data from BigQuery to GCS in CSV format.Each learning objective will correspond to a __TODO__ in this student lab notebook -- try to complete this notebook first and then review the [solution notebook](../solutions/1b_prepare_data_babyweight.ipynb). Set up environment variables and load necessary libraries Check that the Google BigQuery library is installed and if not, install it.
###Code
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
%%bash
!pip install --user google-cloud-bigquery==1.25.0
###Output
Collecting google-cloud-bigquery==1.25.0
Downloading google_cloud_bigquery-1.25.0-py2.py3-none-any.whl (169 kB)
|████████████████████████████████| 169 kB 4.8 MB/s eta 0:00:01
Requirement already satisfied: protobuf>=3.6.0 in /opt/conda/lib/python3.7/site-packages (from google-cloud-bigquery==1.25.0) (3.13.0)
Requirement already satisfied: six<2.0.0dev,>=1.13.0 in /opt/conda/lib/python3.7/site-packages (from google-cloud-bigquery==1.25.0) (1.15.0)
Requirement already satisfied: google-api-core<2.0dev,>=1.15.0 in /opt/conda/lib/python3.7/site-packages (from google-cloud-bigquery==1.25.0) (1.22.1)
Collecting google-resumable-media<0.6dev,>=0.5.0
Downloading google_resumable_media-0.5.1-py2.py3-none-any.whl (38 kB)
Requirement already satisfied: google-auth<2.0dev,>=1.9.0 in /opt/conda/lib/python3.7/site-packages (from google-cloud-bigquery==1.25.0) (1.20.1)
Requirement already satisfied: google-cloud-core<2.0dev,>=1.1.0 in /opt/conda/lib/python3.7/site-packages (from google-cloud-bigquery==1.25.0) (1.3.0)
Requirement already satisfied: setuptools in /opt/conda/lib/python3.7/site-packages (from protobuf>=3.6.0->google-cloud-bigquery==1.25.0) (49.6.0.post20200814)
Requirement already satisfied: pytz in /opt/conda/lib/python3.7/site-packages (from google-api-core<2.0dev,>=1.15.0->google-cloud-bigquery==1.25.0) (2020.1)
Requirement already satisfied: googleapis-common-protos<2.0dev,>=1.6.0 in /opt/conda/lib/python3.7/site-packages (from google-api-core<2.0dev,>=1.15.0->google-cloud-bigquery==1.25.0) (1.51.0)
Requirement already satisfied: requests<3.0.0dev,>=2.18.0 in /opt/conda/lib/python3.7/site-packages (from google-api-core<2.0dev,>=1.15.0->google-cloud-bigquery==1.25.0) (2.24.0)
Requirement already satisfied: cachetools<5.0,>=2.0.0 in /opt/conda/lib/python3.7/site-packages (from google-auth<2.0dev,>=1.9.0->google-cloud-bigquery==1.25.0) (4.1.1)
Requirement already satisfied: rsa<5,>=3.1.4; python_version >= 3.5 in /opt/conda/lib/python3.7/site-packages (from google-auth<2.0dev,>=1.9.0->google-cloud-bigquery==1.25.0) (4.6)
Requirement already satisfied: pyasn1-modules>=0.2.1 in /opt/conda/lib/python3.7/site-packages (from google-auth<2.0dev,>=1.9.0->google-cloud-bigquery==1.25.0) (0.2.8)
Requirement already satisfied: chardet<4,>=3.0.2 in /opt/conda/lib/python3.7/site-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<2.0dev,>=1.15.0->google-cloud-bigquery==1.25.0) (3.0.4)
Requirement already satisfied: idna<3,>=2.5 in /opt/conda/lib/python3.7/site-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<2.0dev,>=1.15.0->google-cloud-bigquery==1.25.0) (2.10)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /opt/conda/lib/python3.7/site-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<2.0dev,>=1.15.0->google-cloud-bigquery==1.25.0) (1.25.10)
Requirement already satisfied: certifi>=2017.4.17 in /opt/conda/lib/python3.7/site-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<2.0dev,>=1.15.0->google-cloud-bigquery==1.25.0) (2020.6.20)
Requirement already satisfied: pyasn1>=0.1.3 in /opt/conda/lib/python3.7/site-packages (from rsa<5,>=3.1.4; python_version >= 3.5->google-auth<2.0dev,>=1.9.0->google-cloud-bigquery==1.25.0) (0.4.8)
Installing collected packages: google-resumable-media, google-cloud-bigquery
ERROR: After October 2020 you may experience errors when installing or updating packages. This is because pip will change the way that it resolves dependency conflicts.
We recommend you use --use-feature=2020-resolver to test your packages with the new resolver before it becomes the default.
google-cloud-storage 1.30.0 requires google-resumable-media<2.0dev,>=0.6.0, but you'll have google-resumable-media 0.5.1 which is incompatible.
Successfully installed google-cloud-bigquery-1.25.0 google-resumable-media-0.5.1
###Markdown
**Note**: Restart your kernel to use updated packages. Kindly ignore the deprecation warnings and incompatibility errors related to google-cloud-storage. Import necessary libraries.
###Code
import os
from google.cloud import bigquery
###Output
_____no_output_____
###Markdown
Set environment variables so that we can use them throughout the entire lab. We will be using our project name for our bucket, so you only need to change your project and region.
###Code
%%bash
export PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project Name is: "$PROJECT
# TODO: Change environment variables
PROJECT = "cloud-training-demos" # REPLACE WITH YOUR PROJECT NAME
BUCKET = "BUCKET" # REPLACE WITH YOUR PROJECT NAME, DEFAULT BUCKET WILL BE PROJECT ID
REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# Do not change these
os.environ["BUCKET"] = PROJECT if BUCKET == "BUCKET" else BUCKET # DEFAULT BUCKET WILL BE PROJECT ID
os.environ["REGION"] = REGION
if PROJECT == "cloud-training-demos":
print("Don't forget to update your PROJECT name! Currently:", PROJECT)
###Output
_____no_output_____
###Markdown
The source datasetOur dataset is hosted in [BigQuery](https://cloud.google.com/bigquery/). The CDC's Natality data has details on US births from 1969 to 2008 and is a publically available dataset, meaning anyone with a GCP account has access. Click [here](https://console.cloud.google.com/bigquery?project=bigquery-public-data&p=publicdata&d=samples&t=natality&page=table) to access the dataset.The natality dataset is relatively large at almost 138 million rows and 31 columns, but simple to understand. `weight_pounds` is the target, the continuous value we’ll train a model to predict. Create a BigQuery Dataset and Google Cloud Storage Bucket A BigQuery dataset is a container for tables, views, and models built with BigQuery ML. Let's create one called __babyweight__ if we have not already done so in an earlier lab. We'll do the same for a GCS bucket for our project too.
###Code
%%bash
# Create a BigQuery dataset for babyweight if it doesn't exist
datasetexists=$(bq ls -d | grep -w babyweight)
if [ -n "$datasetexists" ]; then
echo -e "BigQuery dataset already exists, let's not recreate it."
else
echo "Creating BigQuery dataset titled: babyweight"
bq --location=US mk --dataset \
--description "Babyweight" \
$PROJECT:babyweight
echo "Here are your current datasets:"
bq ls
fi
## Create GCS bucket if it doesn't exist already...
exists=$(gsutil ls -d | grep -w gs://${BUCKET}/)
if [ -n "$exists" ]; then
echo -e "Bucket exists, let's not recreate it."
else
echo "Creating a new GCS bucket."
gsutil mb -l ${REGION} gs://${BUCKET}
echo "Here are your current buckets:"
gsutil ls
fi
###Output
_____no_output_____
###Markdown
Create the training and evaluation data tablesSince there is already a publicly available dataset, we can simply create the training and evaluation data tables using this raw input data. First we are going to create a subset of the data limiting our columns to `weight_pounds`, `is_male`, `mother_age`, `plurality`, and `gestation_weeks` as well as some simple filtering and a column to hash on for repeatable splitting.* Note: The dataset in the create table code below is the one created previously, e.g. "babyweight". Preprocess and filter datasetWe have some preprocessing and filtering we would like to do to get our data in the right format for training.Preprocessing:* Cast `is_male` from `BOOL` to `STRING`* Cast `plurality` from `INTEGER` to `STRING` where `[1, 2, 3, 4, 5]` becomes `["Single(1)", "Twins(2)", "Triplets(3)", "Quadruplets(4)", "Quintuplets(5)"]`* Add `hashcolumn` hashing on `year` and `month`Filtering:* Only want data for years later than `2000`* Only want baby weights greater than `0`* Only want mothers whose age is greater than `0`* Only want plurality to be greater than `0`* Only want the number of weeks of gestation to be greater than `0`
###Code
%%bigquery
CREATE OR REPLACE TABLE
babyweight.babyweight_data AS
SELECT
weight_pounds,
CAST(is_male AS STRING) AS is_male,
mother_age,
CASE
WHEN plurality = 1 THEN "Single(1)"
WHEN plurality = 2 THEN "Twins(2)"
WHEN plurality = 3 THEN "Triplets(3)"
WHEN plurality = 4 THEN "Quadruplets(4)"
WHEN plurality = 5 THEN "Quintuplets(5)"
END AS plurality,
gestation_weeks,
FARM_FINGERPRINT(
CONCAT(
CAST(year AS STRING),
CAST(month AS STRING)
)
) AS hashmonth
FROM
publicdata.samples.natality
WHERE
year > 2000
AND weight_pounds > 0
AND mother_age > 0
AND plurality > 0
AND gestation_weeks > 0
###Output
_____no_output_____
###Markdown
Augment dataset to simulate missing dataNow we want to augment our dataset with our simulated babyweight data by setting all gender information to `Unknown` and setting plurality of all non-single births to `Multiple(2+)`.
###Code
%%bigquery
CREATE OR REPLACE TABLE
babyweight.babyweight_augmented_data AS
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
hashmonth
FROM
babyweight.babyweight_data
UNION ALL
SELECT
weight_pounds,
"Unknown" AS is_male,
mother_age,
CASE
WHEN plurality = "Single(1)" THEN plurality
ELSE "Multiple(2+)"
END AS plurality,
gestation_weeks,
hashmonth
FROM
babyweight.babyweight_data
###Output
_____no_output_____
###Markdown
Split augmented dataset into train and eval setsUsing `hashmonth`, apply a modulo to get approximately a 75/25 train/eval split. Split augmented dataset into train dataset
###Code
%%bigquery
CREATE OR REPLACE TABLE
babyweight.babyweight_data_train AS
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks
FROM
babyweight.babyweight_augmented_data
WHERE
ABS(MOD(hashmonth, 4)) < 3
###Output
_____no_output_____
###Markdown
Split augmented dataset into eval dataset
###Code
%%bigquery
CREATE OR REPLACE TABLE
babyweight.babyweight_data_eval AS
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks
FROM
babyweight.babyweight_augmented_data
WHERE
ABS(MOD(hashmonth, 4)) = 3
###Output
_____no_output_____
###Markdown
Verify table creationVerify that you created the dataset and training data table.
###Code
%%bigquery
-- LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT * FROM babyweight.babyweight_data_train
LIMIT 0
%%bigquery
-- LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT * FROM babyweight.babyweight_data_eval
LIMIT 0
###Output
_____no_output_____
###Markdown
Export from BigQuery to CSVs in GCSUse BigQuery Python API to export our train and eval tables to Google Cloud Storage in the CSV format to be used later for TensorFlow/Keras training. We'll want to use the dataset we've been using above as well as repeat the process for both training and evaluation data.
###Code
# Construct a BigQuery client object.
client = bigquery.Client()
dataset_name = "babyweight"
# Create dataset reference object
dataset_ref = client.dataset(
dataset_id=dataset_name, project=client.project)
# Export both train and eval tables
for step in ["train", "eval"]:
destination_uri = os.path.join(
"gs://", BUCKET, dataset_name, "data", "{}*.csv".format(step))
table_name = "babyweight_data_{}".format(step)
table_ref = dataset_ref.table(table_name)
extract_job = client.extract_table(
table_ref,
destination_uri,
# Location must match that of the source table.
location="US",
) # API request
extract_job.result() # Waits for job to complete.
print("Exported {}:{}.{} to {}".format(
client.project, dataset_name, table_name, destination_uri))
###Output
Exported qwiklabs-gcp-4b437f7e5bfff9dd:babyweight.babyweight_data_train to gs://qwiklabs-gcp-4b437f7e5bfff9dd/babyweight/data/train*.csv
Exported qwiklabs-gcp-4b437f7e5bfff9dd:babyweight.babyweight_data_eval to gs://qwiklabs-gcp-4b437f7e5bfff9dd/babyweight/data/eval*.csv
###Markdown
Verify CSV creationVerify that we correctly created the CSV files in our bucket.
###Code
%%bash
gsutil ls gs://${BUCKET}/babyweight/data/*.csv
%%bash
gsutil cat gs://${BUCKET}/babyweight/data/train000000000000.csv | head -5
%%bash
gsutil cat gs://${BUCKET}/babyweight/data/eval000000000000.csv | head -5
###Output
weight_pounds,is_male,mother_age,plurality,gestation_weeks
2.74916440714,false,44,Single(1),30
3.68833364326,true,42,Single(1),31
9.49971886958,false,15,Single(1),46
8.4437046346,Unknown,15,Single(1),31
|
neuralNetwork.ipynb | ###Markdown
data function
###Code
# Imports needed by the cells in this notebook (missing from the extract)
import numpy as np
from numpy import asarray
from PIL import Image
import matplotlib.pyplot as plt
import tensorflow as tf  # TF 1.x: the cells below use the tf.Session API

def YtoArr(y):
    # One-hot encode each pixel intensity (0-255) into 25 bins of width 10;
    # min(..., 24) keeps values of 250 and above inside the last bin.
    arr = []
    for i in y:
        bin_idx = min(int(i[0] / 10), 24)
        arr.append([1 if j == bin_idx else 0 for j in range(25)])
    return arr
def get_XY_from_image(photo_name:str, color:int, jumps:int=100, show:bool=False):
    # Extract one colour channel and build (X, y) pairs: X holds the 8
    # neighbouring pixel values, y the centre pixel value.
    data = asarray(Image.open(photo_name))
    color_arr = data[:,:,color]
    image_color_arr = Image.fromarray(color_arr)
    if show: image_color_arr.show()
    data_x = []
    data_y = []
    print(f"pic size: {len(color_arr)}x{len(color_arr[0])} name: {photo_name}")
    # "jumps" subsamples the rows so the dataset stays small
    for i in range(1, len(color_arr)-1, jumps):
        for j in range(1, len(color_arr[0])-1):
            temp_y = [color_arr[i][j]]
            temp_x = [color_arr[i-1][j-1], color_arr[i-1][j], color_arr[i][j-1],
                      color_arr[i+1][j], color_arr[i][j+1], color_arr[i+1][j+1],
                      color_arr[i-1][j+1], color_arr[i+1][j-1]]
            data_y.append(temp_y)
            data_x.append(temp_x)
    return (data_x, data_y)
def load_pic_data(pics_array, color:int, jumps:int=100, show:bool=False):
    # Stack the (X, y) pairs extracted from every image in pics_array.
    data_x, data_y = get_XY_from_image(pics_array[0], color, jumps, show)
    for i in pics_array[1:]:
        data_tmp_x, data_tmp_y = get_XY_from_image(i, color, jumps, show)
        data_x = np.append(data_x, data_tmp_x, axis=0)
        data_y = np.append(data_y, data_tmp_y, axis=0)
    return np.array(data_x), np.array(data_y)
###Output
_____no_output_____
###Markdown
data
###Code
data_x , data_y = load_pic_data(
["data/cat_test.jpg", "data/balloon.jpg","data/cat.jpg","data/city.jpg",
"data/city_night.jpg","data/city_color.jpg",
"data/flower.jpg"],
color=0,jumps=100)
data_t_x , data_t_y = load_pic_data(["data/park.jpg"],color=1,jumps=100)
# print(data_t_y)
# print(YtoArr(data_t_y))
data_y = YtoArr(data_y)
data_t_y = YtoArr(data_t_y)
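# NOTE: the cell that defined the network is missing from this extract.
# The lines below are a minimal sketch consistent with the names used
# afterwards (x, y, w1, w2, pred, loss, update); the hidden-layer size
# and learning rate are assumptions, not the original values.
x = tf.placeholder(tf.float32, [None, 8])    # the 8 neighbouring pixels
y = tf.placeholder(tf.float32, [None, 25])   # 25 intensity bins
w1 = tf.Variable(tf.random_normal([8, 16], stddev=0.1))
b1 = tf.Variable(tf.zeros([16]))
w2 = tf.Variable(tf.random_normal([16, 25], stddev=0.1))
b2 = tf.Variable(tf.zeros([25]))
hidden = tf.nn.sigmoid(tf.matmul(x, w1) + b1)
pred = tf.nn.softmax(tf.matmul(hidden, w2) + b2)
loss = tf.reduce_mean(-tf.reduce_sum(y * tf.log(pred + 1e-9), axis=1))
update = tf.train.GradientDescentOptimizer(0.5).minimize(loss)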
sess = tf.Session()
sess.run(tf.global_variables_initializer())
show = 10
loss_in_time = []
w_arr = []
w2_arr = []
test_over_time = []
accuracy_over_time = []
correct_prediction = tf.equal(tf.argmax(pred,1), tf.argmax(y,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
for i in range(0,10000):
sess.run(update, feed_dict = {x:data_x, y:data_y})
if(i%show==0 and i>21):
tmp = sess.run(loss,feed_dict={x:data_x,y:data_y})
loss_in_time.append(tmp)
w_arr.append(sess.run(w1))
w2_arr.append(sess.run(w2))
if(i%(show*1)==0):
print(f"i = {i}, loss = {tmp},")
accuracy_over_time.append(sess.run(accuracy, feed_dict={x:data_t_x,y:data_t_y}))
test_over_time.append(sess.run(loss,feed_dict={x:data_t_x,y:data_t_y}))
d = np.array(np.array(w_arr).transpose()[0]).transpose()
plt.plot(d)
plt.ylabel('w1 over time')
plt.show()
d = np.array(np.array(w2_arr).transpose()[0]).transpose()
plt.plot(d)
plt.ylabel('w2 over time')
plt.show()
plt.plot(loss_in_time,label ="loss")
plt.plot(test_over_time , label ="test")
plt.legend()
plt.ylabel('loss over time')
plt.show()
plt.plot(accuracy_over_time,label ="accuracy over time")
plt.legend()
plt.ylabel('accuracy over time')
plt.show()
picR,y_data = load_pic_data(["data/cat.jpg"],color=0,jumps=1)
picG,y_data = load_pic_data(["data/cat.jpg"],color=1,jumps=1)
picB,y_data = load_pic_data(["data/cat.jpg"],color=2,jumps=1)
pR = sess.run(pred,feed_dict={x:picR})
pG = sess.run(pred,feed_dict={x:picG})
pB = sess.run(pred,feed_dict={x:picB})
# print(p)
def YtoPic(arr):
    # Map each 25-way prediction back to a pixel intensity: argmax bin * 10.
    return [int(np.argmax(line)) * 10 for line in arr]
pictureR = YtoPic(pR)
pictureG = YtoPic(pG)
pictureB = YtoPic(pB)
size_x = 576   # image height, matching the "pic size" printed below
size_y = 1024  # image width
# data_R = np.reshape(pictureR,(size_x-2,size_y-2))
# data_G = np.reshape(pictureG,(size_x-2,size_y-2))
# data_B = np.reshape(pictureB,(size_x-2,size_y-2))
arr = np.zeros((size_x-2,size_y-2,3))
arr[:,:,0] = np.reshape(pictureR,(size_x-2,size_y-2))
arr[:,:,1] = np.reshape(pictureG,(size_x-2,size_y-2))
arr[:,:,2] = np.reshape(pictureB,(size_x-2,size_y-2))
img = Image.fromarray(arr.astype('uint8'),mode="RGB")
img.show(title="calculated")
# y_data = YtoArr(y_data)
# y_data = YtoPic(y_data)
# y_data = np.reshape(y_data,(size_x-2,size_y-2))
# data_RGB = np.concatenate((y_data,data_R), axis=1)
# img = Image.fromarray(data_RGB)
# img.show(title="calculated")
###Output
pic size: 576x1024 name: data/cat.jpg
pic size: 576x1024 name: data/cat.jpg
pic size: 576x1024 name: data/cat.jpg
###Markdown
**Load the training data as follows**
###Code
data_file = open("./mnist_dataset/mnist_train_100.csv",'r')
data_list = data_file.readlines()
data_file.close()
data_list[:1]
###Output
_____no_output_____
###Markdown
**Plot one training record as a sanity check**
###Code
import matplotlib.pyplot
%matplotlib inline
all_value = data_list[0].split(",")
# asfarray: converts to a NumPy array of floats
image_array = np.asfarray(all_value[1:]).reshape((28,28))
matplotlib.pyplot.imshow(image_array, interpolation = 'None', cmap = 'Greys')
###Output
_____no_output_____
###Markdown
** Model training and testing **
###Code
input_nodes = 784
hidden_nodes = 200
output_nodes = 10
learning_rate = 0.2
n = neuralNetwork(input_nodes, hidden_nodes, output_nodes, learning_rate)
# Load the training data from the MNIST dataset
# training_data_file = open("./mnist_dataset/mnist_train_100.csv", 'r')
training_data_file = open("./mnist_dataset/mnist_train.csv", 'r')
training_data_list = training_data_file.readlines()
training_data_file.close()
# Train the model
epochs = 5
for e in range(epochs):
for record in training_data_list:
all_values = record.split(',')
inputs = (np.asfarray(all_values[1:])/255.0*0.99) + 0.01
targets = np.zeros(output_nodes) + 0.01
targets[int(all_values[0])] = 0.99
n.train(inputs, targets)
pass
# Load the test data from the MNIST dataset
# test_data_file = open("./mnist_dataset/mnist_test_10.csv", 'r')
test_data_file = open("./mnist_dataset/mnist_test.csv", 'r')
test_data_list = test_data_file.readlines()
test_data_file.close()
all_values = test_data_list[0].split(',')
print(all_values)
matplotlib.pyplot.imshow(np.asfarray(all_values[1:]).reshape([28,28]))
# Query the trained model for this sample's predicted values
n.query(np.asfarray(all_values[1:])/255.0*0.99 + 0.01)
# Test the neural network
scorecard = []
# Iterate over the test dataset
for record in test_data_list:
all_values = record.split(',')
correct_label = int(all_values[0])
print("正确的标签是:", correct_label)
inputs = (np.asfarray(all_values[1:])/255.0*0.99 + 0.01)
outputs = n.query(inputs)
label = np.argmax(outputs)
print("预测的标签是:", label)
if(label == correct_label):
scorecard.append(1)
else:
scorecard.append(0)
pass
# print(scorecard)
scorecard_array = np.array(scorecard)
print("预测准确率:", scorecard_array.sum()/scorecard_array.size)
###Output
Prediction accuracy: 0.947
###Markdown
- data normalization ok
- batch normalization ok
- momentum learning rate ok
- learning rate decay ok
- weight initialize ok
- dropout ok
- weight regularization ok
- early stopping ok
- focal loss ok
- penalty ok
- weight pruning
###Code
# importing libraries (duplicates removed)
import pandas as pd
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision
import torchvision.transforms as transforms
import torchvision.datasets as dsets
import matplotlib.pyplot as plt
from torch.autograd import Variable
from torch.utils.data import Dataset, DataLoader
from torch.utils.tensorboard import SummaryWriter
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
from imblearn.combine import SMOTEENN
from imblearn.combine import SMOTETomek
dtype = torch.cuda.FloatTensor # GPU tensor type; requires a CUDA-capable device
# load dataset
data = pd.read_csv("dataset/preprocessed.csv")
data = data.drop(data[data.target == -1].index)
data.shape
# Separate input features and target
targets = data.target
targets -= 1
targets = targets.to_numpy()
features = data.drop('target', axis=1)
features = features.to_numpy()
# split test part
X_trainAndVal, X_test, y_trainAndVal, y_test = train_test_split(features, targets, test_size = 0.25, random_state = 0)
# split train and validation part
X_train, X_val, y_train, y_val = train_test_split(X_trainAndVal, y_trainAndVal, test_size = 0.25, random_state = 0)
# print distribution before re-sampling
unique_elements, counts_elements = np.unique(y_train, return_counts=True)
print("Frequency of unique values of the said array:")
print(np.asarray((unique_elements, counts_elements)))
# plot distribution before re-sampling
objects = ('dam lev 1', 'dam lev 2', 'dam lev 3', 'dam lev 4', 'dam lev 5')
y_pos = np.arange(len(objects))
plt.bar(y_pos, counts_elements, align='center', alpha=0.5)
plt.xticks(y_pos, objects)
plt.ylabel('number of sample')
plt.title('Imbalanced Data Distribution')
plt.savefig("imbalanced.png")
# optional re-sampling (uncomment to use)
# sm = SMOTETomek(random_state = 27, n_jobs = -1)
# X_train, y_train = sm.fit_sample(X_train, y_train)
# print distribution after re-sampling
# unique_elements, counts_elements = np.unique(y_train, return_counts=True)
# print("Frequency of unique values of the said array:")
# print(np.asarray((unique_elements, counts_elements)))
# plot distribution after re-sampling
# objects = ('dam lev 1', 'dam lev 2', 'dam lev 3', 'dam lev 4', 'dam lev 5')
# y_pos = np.arange(len(objects))
# plt.bar(y_pos, counts_elements, align='center', alpha=0.5)
# plt.xticks(y_pos, objects)
# plt.ylabel('number of sample')
# plt.title('After Re-sampling Data Distribution')
# plt.savefig("resample.png")
#Scale data
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_val = sc.transform(X_val)
X_test = sc.transform(X_test)
# calculate inverse-frequency class weights for the loss
from sklearn.utils.class_weight import compute_class_weight
class_weights = compute_class_weight(class_weight='balanced', classes=np.unique(y_train), y=y_train)
class_weights = torch.from_numpy(class_weights)
class_weights
# network settings
import sys
epsilon = sys.float_info.epsilon
batch_size = 100000
epochs = 100
input_dim = 43
output_dim = 5
lr = 0.05
momentum_val = 0.9
weight_decay_val = 0.00001
gamma_val = 0.5
prob = 0.1
old_loss = 1 / epsilon
cur_loss = 0.0
best_loss = 1 / epsilon
loss_decrease_count = 0
loss_decrease_limit = 3
loss_decrease_threshold = 0.001
early_stop_epoch = 0
# data load
class datasetLoad(Dataset):
def __init__(self, features,labels):
self.features = features
self.labels = labels
def __len__(self):
return len(self.features)
def __getitem__(self, index):
return self.features[index], self.labels[index]
X_train = datasetLoad(X_train, y_train)
X_val = datasetLoad(X_val, y_val)
# dataLoader
train_loader = torch.utils.data.DataLoader(dataset = X_train, batch_size = batch_size, shuffle=True, num_workers = 1)
val_loader = torch.utils.data.DataLoader(dataset = X_val, batch_size = batch_size, shuffle=True, num_workers = 1)
# focal loss
class FocalLoss(nn.Module):
def __init__(self, focusing_param = 2, balance_param=0.5):
super(FocalLoss, self).__init__()
self.focusing_param = focusing_param
self.balance_param = balance_param
def forward(self, output, target):
        cross_entropy = F.cross_entropy(output, target)
        logpt = -cross_entropy  # log-probability of the target class
pt = torch.exp(logpt)
focal_loss = -((1 - pt) ** self.focusing_param) * logpt
balanced_focal_loss = self.balance_param * focal_loss
return balanced_focal_loss
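# The class above implements focal loss (Lin et al., 2017):
#   FL(p_t) = -balance_param * (1 - p_t)**focusing_param * log(p_t)
# where p_t = exp(-cross_entropy). The (1 - p_t)**gamma factor down-weights
# easy, well-classified examples so training concentrates on the hard ones.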
# network
class neuralNetwork(torch.nn.Module):
def __init__(self, input_dim, hidden1_dim, hidden2_dim, output_dim, dropout_p):
super(neuralNetwork, self).__init__()
self.hidden1 = nn.Linear(input_dim, hidden1_dim, bias=True)
        torch.nn.init.xavier_uniform_(self.hidden1.weight)
self.bnhidden1 = nn.BatchNorm1d(hidden1_dim)
self.hidden2 = nn.Linear(hidden1_dim, hidden2_dim, bias=True)
        torch.nn.init.xavier_uniform_(self.hidden2.weight)
self.bnhidden2 = nn.BatchNorm1d(hidden2_dim)
self.output = nn.Linear(hidden2_dim, output_dim, bias=True)
        torch.nn.init.xavier_uniform_(self.output.weight)
self.dropout = nn.Dropout(dropout_p)
def forward(self, x):
x = self.hidden1(x)
x = self.dropout(x)
x = self.bnhidden1(x)
x = F.leaky_relu_(x, negative_slope=0.01)
x = self.hidden2(x)
x = self.dropout(x)
x = self.bnhidden2(x)
x = F.leaky_relu_(x, negative_slope=0.01)
outputs = self.output(x)
return outputs
# create network object
model = neuralNetwork(input_dim, 20, 10, output_dim, prob)
# choose loss function
# criterion = torch.nn.CrossEntropyLoss()
# criterion = torch.nn.CrossEntropyLoss(weight = class_weights.float())
criterion = FocalLoss()
# choose optimizer and learning rate decay
from torch.optim.lr_scheduler import MultiStepLR
# optimizer = torch.optim.SGD(model.parameters(), lr = lr, momentum = momentum_val, weight_decay = weight_decay_val)
optimizer = torch.optim.Adam(model.parameters(), lr=lr)
scheduler = MultiStepLR(optimizer, milestones=[30, 60, 80], gamma = gamma_val)
# convert network to CUDA
if torch.cuda.is_available():
model = model.cuda()
criterion = criterion.cuda()
feature_train, label_train = next(iter(train_loader))
if torch.cuda.is_available():
feature_train = feature_train.cuda()
grid_train = torchvision.utils.make_grid(feature_train)
# create Tensorboard object
tb = SummaryWriter('runs')
# Upload network and example to Tensorboard
tb.add_image("features", grid_train)
tb.add_graph(model, feature_train.float())
# apply neural network
import datetime
a = datetime.datetime.now().replace(microsecond=0)
train_loss = []
validation_loss = []
for epoch in range(epochs):
train_loss_val = 0.0
train_counter = 0
validation_loss_val = 0.0
val_counter = 0
accuracy = 0.0
for i, (features_train, labels_train) in enumerate(train_loader):
features_train = Variable(features_train)
labels_train = Variable(labels_train)
if torch.cuda.is_available():
features_train = features_train.cuda()
labels_train = labels_train.cuda()
optimizer.zero_grad()
outputs_train = model(features_train.float())
loss_train = criterion(outputs_train.float(), labels_train)
loss_train.backward()
optimizer.step()
train_loss_val += loss_train.item()
train_counter += 1
del features_train
del labels_train
torch.cuda.empty_cache()
train_loss_val /= train_counter
for i, (features_val, labels_val) in enumerate( val_loader):
features_val = Variable(features_val)
labels_val = Variable(labels_val)
if torch.cuda.is_available():
features_val = features_val.cuda()
labels_val = labels_val.cuda()
with torch.no_grad():
outputs_val = model(features_val.float())
loss_val = criterion(outputs_val.float(), labels_val)
validation_loss_val += loss_val.item()
_, predicted = torch.max(outputs_val.data, 1)
        # for GPU runs, bring predictions and labels back to the CPU for sklearn to work
        # (note: this "accuracy" is actually a weighted F1 score, in percent)
        accuracy += f1_score(labels_val.cpu(), predicted.cpu(), average = 'weighted') * 100
val_counter += 1
del features_val
del labels_val
torch.cuda.empty_cache()
validation_loss_val /= val_counter
accuracy /= val_counter
cur_loss = validation_loss_val
if(cur_loss < best_loss):
torch.save(model.state_dict(), 'weights_only.pth')
early_stop_epoch = epoch
best_loss = cur_loss
    if(cur_loss > old_loss + loss_decrease_threshold):
        loss_decrease_count += 1
    if(cur_loss + loss_decrease_threshold < old_loss):
        loss_decrease_count = 0
    if(loss_decrease_count == loss_decrease_limit):
        print("--------------------\n\n\nEARLY STOPPING TRIGGERED\n\n\n\n----------")
        break
    old_loss = cur_loss
scheduler.step()
train_loss.append(train_loss_val)
validation_loss.append(validation_loss_val)
if(epoch % 20 == 0):
print("{")
print("Epoch: {}. Train Loss: {}. ".format(epoch, train_loss_val))
print("Epoch: {}. Validation Loss: {}. Validation Accuracy: {}.".format(epoch, validation_loss_val, accuracy))
print("}")
tb.add_scalar("Train Loss ", train_loss_val, epoch)
tb.add_scalar("Validation Loss ", validation_loss_val, epoch)
tb.add_scalar("Validation Accur ", accuracy, epoch)
for name, weight in model.named_parameters():
tb.add_histogram(name, weight, epoch)
tb.add_histogram(f'{name}.grad', weight.grad, epoch)
tb.close()
# calculate time difference and give warning when the epochs finish
b = datetime.datetime.now().replace(microsecond=0)
print(b-a)
import os,time
counter = 0
while(counter < 1):
os.system('spd-say "your program has finished"')
time.sleep(3)
counter += 1
# plotting the training and validation loss
plt.plot(train_loss, label='Training loss')
plt.plot(validation_loss, label='Validation loss')
x = np.full([2], early_stop_epoch, dtype = int)
y = np.linspace(min(train_loss), max(validation_loss), 2)
plt.plot(x, y, '-r', label='Early stopping Line')
plt.title('Train and Validation Loss')
plt.xlabel('Epoch', color='#1C2833')
plt.ylabel('Validation Loss', color='#1C2833')
plt.legend(loc='upper left')
plt.legend()
plt.grid()
# plt.show()
plt.savefig("loss.png")
# print the early stopping epoch and create a new network
print(early_stop_epoch)
the_model = neuralNetwork(input_dim, 20, 10, output_dim, prob)
if torch.cuda.is_available():
the_model = the_model.cuda()
# load best weight to new network
the_model.load_state_dict(torch.load("weights_only.pth"))
X_test = torch.from_numpy(X_test)
y_test = torch.from_numpy(y_test)
if torch.cuda.is_available():
X_test = X_test.cuda()
y_test = y_test.cuda()
# calculate test results
with torch.no_grad():
outputs = the_model(X_test.float())
_, predicted = torch.max(outputs.data, 1)
# print results
print("Accuracy: \t", accuracy_score(y_test.cpu(), predicted.cpu()))
print("F1 Score: \t", f1_score(y_test.cpu(), predicted.cpu(), average = 'weighted'))
print("Precision:\t", precision_score(y_test.cpu(), predicted.cpu(), average = 'weighted'))
print("Recall: \t", recall_score(y_test.cpu(), predicted.cpu(), average = 'weighted'))
# show confusion matrix
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test.cpu(), predicted.cpu())
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
plt.imshow(cm, interpolation='nearest',cmap="RdYlGn")
plt.title("Confusion Matrix")
plt.colorbar()
plt.ylabel('True label')
plt.xlabel('Predicted label')
for i in range(5):
for j in range(5):
plt.text(j,i,format(cm[i][j],".2f"),horizontalalignment="center",color="black")
plt.tight_layout()
# plt.show()
plt.savefig("confusion.png")
###Output
_____no_output_____ |
tutorials/4 Tutorial for fast_ml - Outlier Analysis & Treatment.ipynb | ###Markdown
Tutorial for using the package `fast-ml`
This package is as good as having a junior Data Scientist working for you. Most of the commonly used EDA steps, missing data imputation techniques, and feature engineering steps are covered in a ready-to-use format.
Part 4. Outlier Analysis and Treatment
1. Import the eda module from the package:
`from fast_ml.missing_data_imputation import MissingDataImputer_Categorical, MissingDataImputer_Numerical`
2. Define the imputer object:
* For categorical variables use `MissingDataImputer_Categorical`
* For numerical variables use `MissingDataImputer_Numerical`
`cat_imputer = MissingDataImputer_Categorical(method = 'frequent')`
3. Fit the object on your dataframe and provide a list of variables:
`cat_imputer.fit(train, variables = ['BsmtQual'])`
4. Apply the transform method on the train / test datasets:
`train = cat_imputer.transform(train)` & `test = cat_imputer.transform(test)`
5. A parameter dictionary is created that stores the values used for imputation. It can be viewed as:
`cat_imputer.param_dict_`
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from fast_ml.missing_data_imputation import MissingDataImputer_Categorical, MissingDataImputer_Numerical
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
df = pd.read_csv('train.csv')
df.shape
df.head(5)
numeric_type = ['float64', 'int64']
category_type = ['object']
###Output
_____no_output_____
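###Markdown
Before the variable-by-variable walkthrough, here is a minimal sketch of the fit/transform imputation flow from the steps above, wired to this notebook's `df` (using `BsmtQual` from the steps; treating `df` as the training frame is an assumption for illustration):
###Code
# Hedged sketch of the imputer workflow described in the steps above
cat_imputer = MissingDataImputer_Categorical(method = 'frequent')
cat_imputer.fit(df, variables = ['BsmtQual'])
df_imputed = cat_imputer.transform(df)
cat_imputer.param_dict_
###Output
_____no_output_____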
###Markdown
Start Missing Data Imputation
Numerical Variables
1. LotFrontage
###Code
# Use the following method for a numerical variable
# (eda_obj is assumed to be an EDA helper object created earlier via fast_ml's eda module)
eda_obj.eda_numerical_variable('MSSubClass')
###Output
_____no_output_____
###Markdown
2. GarageYrBlt
###Code
# Use the following method for a numerical variable
eda_obj.eda_numerical_variable('LotFrontage')
###Output
_____no_output_____
###Markdown
Categorical Variables
1. BsmtQual
2. FireplaceQu
###Code
def eda_num_vars_outlier_detect(df, variables, tol=1.5):
    from scipy.stats import iqr  # interquartile range helper
    col = 3
    row = int(np.ceil(len(variables)/col))
    fig = plt.figure(figsize=(6*col, 5*row))
    for i, var in enumerate(variables):
        # IQR-based outlier fences, computed on the non-null values
        quartile_range = iqr(df[var][df[var].notnull()])
        quartile_1, quartile_3 = np.percentile(df[var][df[var].notnull()], [25, 75])
        lower_bound = quartile_1 - quartile_range*tol
        upper_bound = quartile_3 + quartile_range*tol
        ax = fig.add_subplot(row, col, i+1)
        sns.boxplot(x=df[var], orient='h')
        ax.set_title(f'Lower IQR Bound: {round(lower_bound,2)} | Upper IQR Bound: {round(upper_bound,2)}')
    plt.show()
###Output
_____no_output_____
###Markdown
Tutorial for using the package `fast-ml`
This package is as good as having a junior Data Scientist working for you. Most of the commonly used EDA steps, missing data imputation techniques, and feature engineering steps are covered in a ready-to-use format.
Part 4. Outlier Analysis and Treatment
1. Import the outlier_treatment module from the package fast_ml:
`from fast_ml.outlier_treatment import check_outliers, OutlierTreatment`
2. Check for outliers in the entire dataset:
`outlier_df = check_outliers(train)`
`outlier_df`
3. Define the outlier object:
`outlier_obj = OutlierTreatment(method = 'iqr', tol=1.5)`
4. Fit the object on your dataframe and provide a list of variables:
`outlier_obj.fit(train, ['MSSubClass'])`
5. Apply the transform method on the train / test datasets:
`train = outlier_obj.transform(train)` & `test = outlier_obj.transform(test)`
6. A parameter dictionary is created that stores the values used for outlier treatment. It can be viewed as:
`outlier_obj.param_dict_`
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from fast_ml.outlier_treatment import check_outliers, OutlierTreatment
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
df1 = pd.read_csv('../data/titanic.csv')
df1.shape
df2 = pd.read_csv('../data/house_prices.csv')
df2.shape
df1.head(5)
df2.head()
numeric_type = ['float64', 'int64']
category_type = ['object']
###Output
_____no_output_____
###Markdown
Start Outlier Treatment
A. Check Outliers
###Code
check_outliers(df1)
check_outliers(df2)
###Output
Index(['Id', 'MSSubClass', 'LotFrontage', 'LotArea', 'OverallQual',
'OverallCond', 'YearBuilt', 'YearRemodAdd', 'MasVnrArea', 'BsmtFinSF1',
'BsmtFinSF2', 'BsmtUnfSF', 'TotalBsmtSF', '1stFlrSF', '2ndFlrSF',
'LowQualFinSF', 'GrLivArea', 'BsmtFullBath', 'BsmtHalfBath', 'FullBath',
'HalfBath', 'BedroomAbvGr', 'KitchenAbvGr', 'TotRmsAbvGrd',
'Fireplaces', 'GarageYrBlt', 'GarageCars', 'GarageArea', 'WoodDeckSF',
'OpenPorchSF', 'EnclosedPorch', '3SsnPorch', 'ScreenPorch', 'PoolArea',
'MiscVal', 'MoSold', 'YrSold', 'SalePrice'],
dtype='object')
###Markdown
B. Outlier Treatment
1. MSSubClass
###Code
# Before Outlier Treatment (MSSubClass lives in the house-prices dataframe, df2)
sns.boxplot(y = 'MSSubClass', data = df2, color ='royalblue')
plt.show()
outlier_obj = OutlierTreatment(method = 'iqr', tol=1.5)
outlier_obj.fit(df2, ['MSSubClass'])
outlier_obj.param_dict_
df2 = outlier_obj.transform(df2)
# After Outlier Treatment
sns.boxplot(y = 'MSSubClass', data = df2, color ='royalblue')
plt.show()
outlier_obj2 = OutlierTreatment(method = 'iqr', tol=1.5)
outlier_obj2.fit(df2)
outlier_obj2.param_dict_
###Output
_____no_output_____ |
Prediction and Control with Function Approximation/Week 3/Notebook_ Function Approximation and Control/Assignment3-v3.ipynb | ###Markdown
Assignment 3: Function Approximation and Control
Welcome to Assignment 3. In this notebook you will learn how to:
- Use function approximation in the control setting
- Implement the Sarsa algorithm using tile coding
- Compare three settings for tile coding to see their effect on our agent
As with the rest of the notebooks, do not import additional libraries or adjust grading cells, as this will break the grader.
MAKE SURE TO RUN ALL OF THE CELLS SO THE GRADER GETS THE OUTPUT IT NEEDS
###Code
# Import Necessary Libraries
import numpy as np
import matplotlib.pyplot as plt
import tiles3 as tc
from rl_glue import RLGlue
from agent import BaseAgent
from utils import argmax
import mountaincar_env
import time
###Output
_____no_output_____
###Markdown
In the above cell, we import the libraries we need for this assignment. You may have noticed that we import mountaincar_env. This is the __Mountain Car Task__ introduced in [Section 10.1 of the textbook](http://www.incompleteideas.net/book/RLbook2018.pdf#page=267). The task is for an underpowered car to make it to the top of a hill:
The car is underpowered, so the agent needs to learn to rock back and forth to get enough momentum to reach the goal. At each time step the agent receives from the environment its current velocity (a float between -0.07 and 0.07) and its current position (a float between -1.2 and 0.5). Because our state is continuous, there are a potentially infinite number of states that our agent could be in. We need a function approximation method to help the agent deal with this. In this notebook we will use tile coding. We provide a tile coding implementation for you to use, imported above with tiles3.
Section 0: Tile Coding Helper Function
To begin, we are going to build a tile coding class for our Sarsa agent that will make it easier to make calls to our tile coder.
Tile Coding Function
Tile coding is introduced in [Section 9.5.4 of the textbook](http://www.incompleteideas.net/book/RLbook2018.pdf#page=239) as a way to create features that can both provide good generalization and discrimination. It consists of multiple overlapping tilings, where each tiling is a partitioning of the space into tiles.
To help keep our agent code clean, we are going to make a function specific to tile coding for our Mountain Car environment. To help, we are going to use the Tiles3 library, a Python 3 implementation of the tile coder. To start, take a look at the documentation: [Tiles3 documentation](http://incompleteideas.net/tiles/tiles3.html)
To get the tile coder working we need to implement a few pieces:
- First: create an index hash table - this is done for you in the init function using tc.IHT.
- Second: scale the inputs for the tile coder based on the number of tiles and the range of values each input could take. The tile coder needs to take in a number in range [0, 1], or scaled to be [0, 1] * num_tiles. For more on this refer to the [Tiles3 documentation](http://incompleteideas.net/tiles/tiles3.html).
- Finally: call tc.tiles to get the active tiles back.
###Code
# Tile Coding Function [Graded]
class MountainCarTileCoder:
def __init__(self, iht_size=4096, num_tilings=8, num_tiles=8):
"""
Initializes the MountainCar Tile Coder
Initializers:
iht_size -- int, the size of the index hash table, typically a power of 2
num_tilings -- int, the number of tilings
num_tiles -- int, the number of tiles. Here both the width and height of the
tile coder are the same
Class Variables:
self.iht -- tc.IHT, the index hash table that the tile coder will use
self.num_tilings -- int, the number of tilings the tile coder will use
self.num_tiles -- int, the number of tiles the tile coder will use
"""
self.iht = tc.IHT(iht_size)
self.num_tilings = num_tilings
self.num_tiles = num_tiles
def get_tiles(self, position, velocity):
"""
Takes in a position and velocity from the mountaincar environment
and returns a numpy array of active tiles.
Arguments:
position -- float, the position of the agent between -1.2 and 0.5
velocity -- float, the velocity of the agent between -0.07 and 0.07
returns:
tiles - np.array, active tiles
"""
# Set the max and min of position and velocity to scale the input
# POSITION_MIN
# POSITION_MAX
# VELOCITY_MIN
# VELOCITY_MAX
### START CODE HERE ###
POSITION_MIN = -1.2
POSITION_MAX = 0.5
VELOCITY_MIN = -0.07
VELOCITY_MAX = 0.07
### END CODE HERE ###
# Use the ranges above and self.num_tiles to set position_scale and velocity_scale
# position_scale = number of tiles / position range
# velocity_scale = number of tiles / velocity range
# Scale position and velocity by multiplying the inputs of each by their scale
### START CODE HERE ###
position_scale = self.num_tiles/(POSITION_MAX - POSITION_MIN)
velocity_scale = self.num_tiles /(VELOCITY_MAX - VELOCITY_MIN)
### END CODE HERE ###
# get the tiles using tc.tiles, with self.iht, self.num_tilings and [scaled position, scaled velocity]
        # nothing to implement here
tiles = tc.tiles(self.iht, self.num_tilings, [position * position_scale,
velocity * velocity_scale])
return np.array(tiles)
# [DO NOT CHANGE]
tests = [[-1.0, 0.01], [0.1, -0.01], [0.2, -0.05], [-1.0, 0.011], [0.2, -0.05]]
mctc = MountainCarTileCoder(iht_size=1024, num_tilings=8, num_tiles=8)
t = []
for test in tests:
position, velocity = test
tiles = mctc.get_tiles(position=position, velocity=velocity)
t.append(tiles)
print("Your results:")
for tiles in t:
print(tiles)
print()
print("Expected results:")
expected = """[0 1 2 3 4 5 6 7]
[ 8 9 10 11 12 13 14 15]
[16 17 18 19 20 21 22 23]
[ 0 24 2 3 4 5 6 7]
[16 17 18 19 20 21 22 23]
"""
print(expected)
np.random.seed(1)
mctc_test = MountainCarTileCoder(iht_size=1024, num_tilings=8, num_tiles=8)
test = [mctc_test.get_tiles(np.random.uniform(-1.2, 0.5), np.random.uniform(-0.07, 0.07)) for _ in range(10)]
np.save("tiles_test", test)
###Output
Your results:
[0 1 2 3 4 5 6 7]
[ 8 9 10 11 12 13 14 15]
[16 17 18 19 20 21 22 23]
[ 0 24 2 3 4 5 6 7]
[16 17 18 19 20 21 22 23]
Expected results:
[0 1 2 3 4 5 6 7]
[ 8 9 10 11 12 13 14 15]
[16 17 18 19 20 21 22 23]
[ 0 24 2 3 4 5 6 7]
[16 17 18 19 20 21 22 23]
###Markdown
Section 1: Sarsa Agent
We are now going to use the functions that we just created to implement the Sarsa algorithm. Recall from class that Sarsa stands for State, Action, Reward, State, Action.
For this case we have given you an argmax function similar to what you wrote back in Course 1 Assignment 1. Recall, this is different than the argmax function that is used by numpy, which returns the first index of a maximum value. We want our argmax function to arbitrarily break ties, which is what the imported argmax function does. The given argmax function takes in an array of values and returns an int of the chosen action: `argmax(action_values)`
There are multiple ways that we can deal with actions for the tile coder. Here we are going to use one simple method - make the size of the weight vector equal to (num_actions, iht_size). This will give us one weight vector for each action and one weight for each tile.
Use the above function to help fill in select_action, agent_start, agent_step, and agent_end.
Hints:
1) The tile coder returns a list of active indexes (e.g. [1, 12, 22]). You can index a numpy array using an array of values - this will return an array of the values at each of those indices. So in order to get the value of a state we can index our weight vector using the action and the array of tiles that the tile coder returns:
```self.w[action][active_tiles]```
This will give us an array of values, one for each active tile, and we sum the result to get the value of that state-action pair.
2) In the case of a binary feature vector (such as the tile coder), the derivative is 1 at each of the active tiles, and zero otherwise.
###Code
# SARSA
class SarsaAgent(BaseAgent):
"""
Initialization of Sarsa Agent. All values are set to None so they can
be initialized in the agent_init method.
"""
def __init__(self):
self.last_action = None
self.last_state = None
self.epsilon = None
self.gamma = None
self.iht_size = None
self.w = None
self.alpha = None
self.num_tilings = None
self.num_tiles = None
        self.tc = None
self.initial_weights = None
self.num_actions = None
self.previous_tiles = None
def agent_init(self, agent_info={}):
"""Setup for the agent called when the experiment first starts."""
self.num_tilings = agent_info.get("num_tilings", 8)
self.num_tiles = agent_info.get("num_tiles", 8)
self.iht_size = agent_info.get("iht_size", 4096)
self.epsilon = agent_info.get("epsilon", 0.0)
self.gamma = agent_info.get("gamma", 1.0)
self.alpha = agent_info.get("alpha", 0.5) / self.num_tilings
self.initial_weights = agent_info.get("initial_weights", 0.0)
self.num_actions = agent_info.get("num_actions", 3)
# We initialize self.w to three times the iht_size. Recall this is because
# we need to have one set of weights for each action.
self.w = np.ones((self.num_actions, self.iht_size)) * self.initial_weights
        # We initialize self.tc to the mountain car version of the
        # tile coder that we created above
self.tc = MountainCarTileCoder(iht_size=self.iht_size,
num_tilings=self.num_tilings,
num_tiles=self.num_tiles)
def select_action(self, tiles):
"""
Selects an action using epsilon greedy
Args:
tiles - np.array, an array of active tiles
Returns:
(chosen_action, action_value) - (int, float), tuple of the chosen action
and it's value
"""
action_values = []
chosen_action = None
# First loop through the weights of each action and populate action_values
# with the action value for each action and tiles instance
        # Use np.random.random to decide if an exploratory action should be taken
        # and set chosen_action to a random action if it is
        # Otherwise choose the greedy action using the given argmax
        # function and the action values (don't use numpy's argmax)
### START CODE HERE ###
action_values = np.sum(self.w[:,tiles], axis = 1)
if np.random.random() < self.epsilon:
chosen_action = np.random.randint(self.num_actions)
else:
chosen_action = argmax(action_values)
### END CODE HERE ###
return chosen_action, action_values[chosen_action]
def agent_start(self, state):
"""The first method called when the experiment starts, called after
the environment starts.
Args:
state (Numpy array): the state observation from the
                environment's env_start function.
Returns:
The first action the agent takes.
"""
position, velocity = state
# Use self.tc to set active_tiles using position and velocity
# set current_action to the epsilon greedy chosen action using
# the select_action function above with the active tiles
### START CODE HERE ###
active_tiles = self.tc.get_tiles(position, velocity)
current_action, action_value = self.select_action(active_tiles)
### END CODE HERE ###
self.last_action = current_action
self.previous_tiles = np.copy(active_tiles)
return self.last_action
def agent_step(self, reward, state):
"""A step taken by the agent.
Args:
reward (float): the reward received for taking the last action taken
state (Numpy array): the state observation from the
                environment's step function, where the agent ended up after the
last step
Returns:
The action the agent is taking.
"""
# choose the action here
position, velocity = state
# Use self.tc to set active_tiles using position and velocity
# set current_action and action_value to the epsilon greedy chosen action using
# the select_action function above with the active tiles
        # Update self.w at self.previous_tiles and self.last_action
# using the reward, action_value, self.gamma, self.w,
# self.alpha, and the Sarsa update from the textbook
### START CODE HERE ###
active_tiles = self.tc.get_tiles(position, velocity)
current_action, action_value = self.select_action(active_tiles)
previous_value = np.sum(self.w[self.last_action][self.previous_tiles])
self.w[self.last_action][self.previous_tiles] += self.alpha * (reward + self.gamma * action_value - previous_value)
### END CODE HERE ###
self.last_action = current_action
self.previous_tiles = np.copy(active_tiles)
return self.last_action
def agent_end(self, reward):
"""Run when the agent terminates.
Args:
reward (float): the reward the agent received for entering the
terminal state.
"""
        # Update self.w at self.previous_tiles and self.last_action
# using the reward, self.gamma, self.w,
# self.alpha, and the Sarsa update from the textbook
# Hint - there is no action_value used here because this is the end
# of the episode.
### START CODE HERE ###
previous_value = np.sum(self.w[self.last_action][self.previous_tiles])
self.w[self.last_action][self.previous_tiles] += self.alpha * (reward - previous_value)
### END CODE HERE ###
def agent_cleanup(self):
"""Cleanup done after the agent ends."""
pass
def agent_message(self, message):
"""A function used to pass information from the agent to the experiment.
Args:
message: The message passed to the agent.
Returns:
The response (or answer) to the message.
"""
pass
# Test Epsilon Greedy Function [DO NOT CHANGE]
agent = SarsaAgent()
agent.agent_init({"epsilon": 0.1})
agent.w = np.array([np.array([1, 2, 3]), np.array([4, 5, 6]), np.array([7, 8, 9])])
total = 0
for i in range(1000):
chosen_action, action_value = agent.select_action(np.array([0,1]))
total += action_value
print(total)
assert total < 15000, "Check that you are not always choosing the best action"
np.save("epsilon_test", total)
agent = SarsaAgent()
agent.agent_init({"epsilon": 0.0})
agent.w = np.array([np.array([1, 2, 3]), np.array([4, 5, 6]), np.array([7, 8, 9])])
chosen_action, action_value = agent.select_action(np.array([0,1]))
print("Expected value")
print("(2, 15)")
print("Your value")
print((chosen_action, action_value))
np.save("egreedy_test", (chosen_action, action_value))
# Test Sarsa Agent [DO NOT CHANGE]
num_runs = 10
num_episodes = 50
env_info = {"num_tiles": 8, "num_tilings": 8}
agent_info = {}
all_steps = []
agent = SarsaAgent
env = mountaincar_env.Environment
start = time.time()
for run in range(num_runs):
if run % 5 == 0:
print("RUN: {}".format(run))
rl_glue = RLGlue(env, agent)
rl_glue.rl_init(agent_info, env_info)
steps_per_episode = []
for episode in range(num_episodes):
rl_glue.rl_episode(15000)
steps_per_episode.append(rl_glue.num_steps)
all_steps.append(np.array(steps_per_episode))
print("Run time: {}".format(time.time() - start))
plt.plot(np.mean(np.array(all_steps), axis=0))
np.save("sarsa_test", np.array(all_steps))
###Output
RUN: 0
RUN: 5
Run time: 13.998748302459717
###Markdown
The learning curve of your agent should look similar to ours, though it will not look exactly the same. If there are some spiky points, that is okay. Due to stochasticity, a few episodes may have taken much longer, causing some spikes in the plot. The trend of the line should be similar, though, generally decreasing to about 200 steps per run.
This result was using 8 tilings with 8x8 tiles on each. Let's see if we can do better, and what different tilings look like. We will also test 2 tilings of 16x16 tiles and 32 tilings of 4x4 tiles. These three choices produce the same number of features (512), but distributed quite differently.
###Code
# Compare the three
num_runs = 20
num_episodes = 100
env_info = {}
agent_runs = []
# alphas = [0.2, 0.4, 0.5, 1.0]
alphas = [0.5]
agent_info_options = [{"num_tiles": 16, "num_tilings": 2, "alpha": 0.5},
{"num_tiles": 4, "num_tilings": 32, "alpha": 0.5},
{"num_tiles": 8, "num_tilings": 8, "alpha": 0.5}]
agent_info_options = [{"num_tiles" : agent["num_tiles"],
"num_tilings": agent["num_tilings"],
"alpha" : alpha} for agent in agent_info_options for alpha in alphas]
agent = SarsaAgent
env = mountaincar_env.Environment
for agent_info in agent_info_options:
all_steps = []
start = time.time()
for run in range(num_runs):
if run % 5 == 0:
print("RUN: {}".format(run))
env = mountaincar_env.Environment
rl_glue = RLGlue(env, agent)
rl_glue.rl_init(agent_info, env_info)
steps_per_episode = []
for episode in range(num_episodes):
rl_glue.rl_episode(15000)
steps_per_episode.append(rl_glue.num_steps)
all_steps.append(np.array(steps_per_episode))
agent_runs.append(np.mean(np.array(all_steps), axis=0))
print(rl_glue.agent.alpha)
print("Run Time: {}".format(time.time() - start))
plt.figure(figsize=(15, 10), dpi= 80, facecolor='w', edgecolor='k')
plt.plot(np.array(agent_runs).T)
plt.xlabel("Episode")
plt.ylabel("Steps Per Episode")
plt.yscale("linear")
plt.ylim(0, 1000)
plt.legend(["num_tiles: {}, num_tilings: {}, alpha: {}".format(agent_info["num_tiles"],
agent_info["num_tilings"],
agent_info["alpha"])
for agent_info in agent_info_options])
###Output
RUN: 0
RUN: 5
RUN: 10
RUN: 15
0.25
Run Time: 69.24917554855347
RUN: 0
RUN: 5
RUN: 10
RUN: 15
0.015625
Run Time: 37.583441495895386
RUN: 0
RUN: 5
RUN: 10
RUN: 15
0.0625
Run Time: 39.95271873474121
|
notebooks_completos/000-Bienvenido.ipynb | ###Markdown
Welcome to the AeroPython course
The goal of this course is to __get you started from scratch in Python programming and to learn different applications of this language in engineering.__
__Our fundamental working tool is the Jupyter notebook__; you will learn more about it in the coming classes. Throughout the course you will become familiar with it and learn how to use it (this document was generated from a notebook).
In this initial session we will go over the steps to follow so that you can __install Python, download the material, and start learning at your own pace.__ Remember that all the course material is available in [our repository](https://github.com/AeroPython/Curso_AeroPython).
_Let's get to work!_
Steps to follow:
1. Downloading Python.
Installing Python, the Notebook, and all the packages we will use one by one can be an arduous and exhausting task, but don't worry: someone has already done the hard work!
__[Anaconda](https://continuum.io/anaconda/) is a Python distribution that bundles many of the libraries needed for scientific computing__, and of course all the ones we will need in this course. It also __includes tools for programming in Python, such as the [Notebook](https://ipython.org/index.html) and [Spyder](https://code.google.com/p/spyderlib/)__ (a MATLAB-style IDE). All you need to do is:
* Go to the [Anaconda downloads page](http://continuum.io/downloads).
* Select your operating system (Windows, OSX, Linux).
* Download Anaconda (we will use Python 3.X).
2. Installing Python.
Check the Anaconda __[installation instructions](http://docs.continuum.io/anaconda/install.html)__ for your operating system. On Windows and OS X you will find the usual graphical installers you are already used to. On Linux you will need to run the installation script from the command line, so remember to check that you have bash installed and to give the script execution permissions.
If you have any doubts during the process, remember that __internet search engines are your best friends!__
Very good! It is now installed, but where?
* On __Windows__, under `Inicio > Anaconda` you will see a series of tools now at your disposal. Don't be afraid to open them!
* On __OS X__, you can reach a launcher with the same tools from the `anaconda` folder inside your home folder.
* On __Linux__, due to the large number of combinations of distributions and desktops, you won't have those graphical shortcuts (which doesn't mean you can't create them yourself later), but as you will see they are not needed at all and are not part of how we work in this course.
Now let's __update Anaconda__ to make sure our Python distribution has all of its packages up to date. Open a __command window__ (Command Prompt on Windows or a terminal on OS X) and run the following update commands (confirming if new packages have to be installed): `conda update anaconda` and then `conda update --all`.
If you run into any kind of problem during this process, [uninstall your Anaconda distribution](http://docs.continuum.io/anaconda/install.html) and reinstall it somewhere you can be sure of a stable internet connection.
We now have our Python distribution with every package we need (and practically every one we might need in the future). _Time to get to work!_
3. Downloading the course material.
The course material is available on __GitHub, a platform for hosting software projects that also provides a series of tools for teamwork__. Think of it as a kind of social network and tool for writing and sharing code. (Don't worry, you won't need to know anything about it to follow the course.) Simply go to our [course repository on GitHub](https://github.com/AeroPython/curso_caminos-2016), and on the right you will find a __*Clone or download*__ button like this one:
Click it, select __*Download Zip*__, __save the file__ on your computer and __unzip it__.
4. Using the course material.
Once Python is installed and the course material downloaded, to be able to use it you must __open a command line in the folder you just unzipped__.
* On __Windows__, you can do this from the file explorer. First navigate to the folder, then `shift + right-click` on an empty space in the folder and click `Abrir ventana de comandos aquí`:
* On __OS X__, you can enable the [new terminal at folder](http://appleadicto.com/mac/osx/ejecutar-el-terminal-de-os-x-desde-una-carpeta-del-finder/) menu:
* On __Linux__, every desktop available has an option to launch a terminal in a given folder (for example, the `nautilus-open-terminal` plugin in GNOME, or pressing `F4` inside Dolphin in KDE).
A command line will open; type in it: `jupyter notebook` and press Enter.
__It is important that the path shown in the command line corresponds to the course folder (e.g. "curso_caminos-2016-master"), or certain elements such as embedded images will not display correctly!__
A few lines will appear and __your default web browser will open__. __No internet connection is required__. What is happening is that *"your browser is displaying what the program running on the command line sends to it"* (think of it that way for now; you will have time to dig deeper if you want). __So do not close the command line until you have finished using the notebook and have saved and closed it in your browser.__
In that browser window you can move through the folders and see the files with the `.ipynb` extension.
__Go to the `Notebooks` folder and open the first class by clicking on it.__ To change the style (font, colors...) go to `File > Trust Notebook`.
That first class gives a short introduction to Python. __Read the beginning calmly__ to learn how to handle the Notebook (you can also use the help: `Help > User Interface Tour`), and __don't be afraid to poke around and change things as you like__. You won't break your computer, and in the worst case you can always download everything again from GitHub.
You are now ready to start!
---
Video lecture, part of the [Python course for scientists and engineers](http://cacheme.org/curso-online-python-cientifico-ingenieros/) recorded at the Escuela Politécnica Superior of the Universidad de Alicante.
###Code
from IPython.display import YouTubeVideo
YouTubeVideo("x4xegDME5C0", width=560, height=315, list="PLGBbVX_WvN7as_DnOGcpkSsUyXB1G_wqb")
###Output
_____no_output_____
###Markdown
---
Follow us on Twitter! Follow @AeroPython
This notebook was created by: Juan Luis Cano, Mabel Delgado and Álex Sáez. The AeroPython course by Juan Luis Cano Rodriguez and Alejandro Sáez Mollejo is distributed under a Creative Commons Attribution 4.0 International License.
---
_The following cells contain the Notebook configuration._
_To view and use the Twitter links, the notebook must be run as [trusted](http://ipython.org/ipython-doc/dev/notebook/security.html):_ File > Trusted Notebook
###Code
# This cell applies the notebook's style
from IPython.core.display import HTML
css_file = '../styles/aeropython.css'
HTML(open(css_file, "r").read())
###Output
_____no_output_____ |
test_databases.ipynb | ###Markdown
Teste Mysql
###Code
import sqlite3
import pandas as pd
import mysql.connector
from databases import Mysql, Sqlite
from mysql.connector import errorcode
dict_df = {'Nome': ['Bruno'],
'Status': ['Casado'],
'Profissão': ['Cientista de Dados']}
df_test = pd.DataFrame(dict_df)
df_test.head()
bd = Mysql()
#Criando a tabela
query = "CREATE TABLE TEST (Nome VARCHAR(20), Status VARCHAR(20), Profissão VARCHAR(20)) CHARACTER SET 'UTF8MB4' "
bd.create_table(query=query, user='brunods', password='Bruno2208', host='127.0.0.1', database='brunods')
#Inserindo Dados
bd.insert_data(df=df_test, tabela=' TEST', user='brunods', password='Bruno2208', host='127.0.0.1', database='brunods')
# Coletando os dados
query = "SELECT * FROM brunods.TEST"
dataset = bd.retrieve_data(query=query, user='brunods', password='Bruno2208', host='127.0.0.1', database='brunods', connect=True)
dataset.head()
###Output
_____no_output_____
###Markdown
Sqlite3
###Code
bd = Sqlite()
#Criando Tabela
query = "CREATE TABLE test01 (Nome VARCHAR(20), Status VARCHAR(20), Profissão VARCHAR(20))"
bd.create_table(query=query, database='bruno-ds')
#Inserindo Dados
bd.insert_data(df=df_test, tabela='test01', database='bruno-ds')
#Coletando os dados
query="SELECT * FROM test01"
dataset = bd.retrieve_data(query=query, database='bruno-ds')
dataset.head()
###Output
Fetching the data!!!
Connection closed!!!
|
fake_news/fake_news_challenge/fake_news_challenge.ipynb | ###Markdown
FNC - FakeNewsChallenge
Link: [https://github.com/FakeNewsChallenge/fnc-1](https://github.com/FakeNewsChallenge/fnc-1)
This Jupyter notebook covers a descriptive analysis of the **FNC - FakeNewsChallenge** dataset.
**Note:** The repository contains several files: train, test, and competition test files. In this analysis, we will analyse just the **train** dataset.
Attributes
* **headline** - headline of the article
* **body** - body of the article
* **stance**:
  * unrelated
  * discuss
  * agree
  * disagree
Setup and import libraries
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
###Output
_____no_output_____
###Markdown
Read the data
Because the data are divided into two files (stances and bodies of the articles are separated), we need to join them:
###Code
# read both files
df_bodies = pd.read_csv('data/train_bodies.csv')
df_stances = pd.read_csv('data/train_stances.csv')
# let's merge both files
df = pd.merge(df_stances, df_bodies, on='Body ID')
###Output
_____no_output_____
###Markdown
Analysis Count of records
###Code
len(df)
###Output
_____no_output_____
###Markdown
Data examples
###Code
df.head()
###Output
_____no_output_____
###Markdown
More information about the data
###Code
df.info()
df.describe(include='all')
###Output
_____no_output_____
###Markdown
NaN values
Are there any NaN values in our data?
###Code
df.isnull().values.any()
###Output
_____no_output_____
###Markdown
Let's look at the NaN values per column:
###Code
df.isnull().sum().plot(kind='bar', ylim=(0, len(df)), title='NaN values per column')
###Output
_____no_output_____
###Markdown
Attributes analysis
What is the distribution of fake news labels in our data?
###Code
df['Stance'].value_counts().plot(kind='bar', title='Distribution of labels')
###Output
_____no_output_____ |
basic-programming/python_101.ipynb | ###Markdown
Python 101
This is an optional notebook to get you up to speed with Python in case you are new to Python or need a refresher. The material here is a crash course in Python; I highly recommend the [official Python tutorial](https://docs.python.org/3/tutorial/) for a deeper dive. Consider reading [this page](https://docs.python.org/3/tutorial/appetite.html) in the Python docs for background on Python and bookmarking the [glossary](https://docs.python.org/3/glossary.html#glossary).
Basic data types
Numbers
Numbers in Python can be represented as integers (e.g. `5`) or floats (e.g. `5.0`). We can perform operations on them:
###Code
5 + 6
2.5 / 3
###Output
_____no_output_____
###Markdown
Booleans
We can check for equality, giving us a Boolean:
###Code
5 == 6
5 < 6
###Output
_____no_output_____
###Markdown
These statements can be combined with logical operators: `not`, `and`, `or`
###Code
(5 < 6) and not (5 == 6)
False or True
True or False
###Output
_____no_output_____
###Markdown
Strings
Using strings, we can handle text in Python. These values must be surrounded in quotes — single (`'...'`) is the standard, but double (`"..."`) works as well:
###Code
'hello'
###Output
_____no_output_____
###Markdown
We can also perform operations on strings. For example, we can see how long a string is with `len()`:
###Code
len('hello')
###Output
_____no_output_____
###Markdown
We can select parts of the string by specifying the **index**. Note that in Python the 1st character is at index 0:
###Code
'hello'[0]
###Output
_____no_output_____
###Markdown
We can concatenate strings with `+`:
###Code
'hello' + ' ' + 'world'
###Output
_____no_output_____
###Markdown
We can check if characters are in the string with the `in` operator:
###Code
'h' in 'hello'
###Output
_____no_output_____
###Markdown
Variables
Notice that just typing text causes an error. Errors in Python attempt to clue us in to what went wrong with our code. In this case, we have a `NameError` exception, which tells us that `'hello'` is not defined. This means that [the Python interpreter](https://docs.python.org/3/tutorial/interpreter.html) looked for a **variable** named `hello`, but it didn't find one.
###Code
hello
###Output
_____no_output_____
###Markdown
Variables give us a way to store data types. We define a variable using the `variable_name = value` syntax:
###Code
x = 5
y = 7
x + y
###Output
_____no_output_____
###Markdown
The variable name cannot contain spaces; we usually use `_` instead. The best variable names are descriptive ones:
###Code
book_title = 'Hands-On Data Analysis with Pandas'
###Output
_____no_output_____
###Markdown
Variables can be any data type. We can check which one it is with `type()`, which is a **function** (more on that later):
###Code
type(x)
type(book_title)
###Output
_____no_output_____
###Markdown
If we need to see the value of a variable, we can print it using the `print()` function:
###Code
print(book_title)
###Output
Hands-On Data Analysis with Pandas
###Markdown
Collections of Items Lists We can store a collection of items in a list:
###Code
['hello', ' ', 'world']
###Output
_____no_output_____
###Markdown
The list can be stored in a variable. Note that the items in the list can be of different types:
###Code
my_list = ['hello', 3.8, True, 'Python']
type(my_list)
###Output
_____no_output_____
###Markdown
We can see how many elements are in the list with `len()`:
###Code
len(my_list)
###Output
_____no_output_____
###Markdown
We can also use the `in` operator to check if a value is in the list:
###Code
'world' in my_list
###Output
_____no_output_____
###Markdown
We can select items in the list just as we did with strings, by providing the index to select:
###Code
my_list[1]
###Output
_____no_output_____
###Markdown
Python also allows us to use negative values, so we can easily select the last one:
###Code
my_list[-1]
###Output
_____no_output_____
###Markdown
Another powerful feature of lists (and strings) is **slicing**. We can grab the middle 2 elements in the list:
###Code
my_list[1:3]
###Output
_____no_output_____
###Markdown
... or every other one:
###Code
my_list[::2]
###Output
_____no_output_____
###Markdown
We can even select the list in reverse:
###Code
my_list[::-1]
###Output
_____no_output_____
###Markdown
Note: This syntax is `[start:stop:step]` where the selection is inclusive of the start index, but exclusive of the stop index. If `start` isn't provided, `0` is used. If `stop` isn't provided, the number of elements is used (4, in our case); this works because the `stop` is exclusive. If `step` isn't provided, it is 1. We can use the `join()` method on a string object to concatenate all the items of a list into a single string. The string we call the `join()` method on will be used as the separator; here we separate with a pipe (|):
###Code
'|'.join(['x', 'y', 'z'])
###Output
_____no_output_____
###Markdown
Tuples Tuples are similar to lists; however, they can't be modified after creation, i.e. they are **immutable**. Instead of square brackets, we use parentheses to create tuples:
###Code
my_tuple = ('a', 5)
type(my_tuple)
my_tuple[0]
###Output
_____no_output_____
###Markdown
Immutable objects can't be modified:
###Code
my_tuple[0] = 'b'
###Output
_____no_output_____
###Markdown
Dictionaries We can store mappings of key-value pairs using dictionaries:
###Code
shopping_list = {
'veggies': ['spinach', 'kale', 'beets'],
'fruits': 'bananas',
'meat': 0
}
type(shopping_list)
###Output
_____no_output_____
###Markdown
To access the values associated with a specific key, we use the square bracket notation again:
###Code
shopping_list['veggies']
###Output
_____no_output_____
###Markdown
We can extract all of the keys with `keys()`:
###Code
shopping_list.keys()
###Output
_____no_output_____
###Markdown
We can extract all of the values with `values()`:
###Code
shopping_list.values()
###Output
_____no_output_____
###Markdown
Finally, we can call `items()` to get back pairs of (key, value) pairs:
###Code
shopping_list.items()
###Output
_____no_output_____
###Markdown
Sets A set is a collection of unique items; a common use is to remove duplicates from a list. These are also written with curly braces, but notice there is no key-value mapping:
###Code
my_set = {1, 1, 2, 'a'}
type(my_set)
###Output
_____no_output_____
###Markdown
How many items are in this set?
###Code
len(my_set)
###Output
_____no_output_____
###Markdown
We put in 4 items but the set only has 3 because duplicates are removed:
###Code
my_set
###Output
_____no_output_____
###Markdown
We can check if a value is in the set:
###Code
2 in my_set
###Output
_____no_output_____
###Markdown
Functions We can define functions to package up our code for reuse. We have already seen some functions: `len()`, `type()`, and `print()`. They are all functions that take **arguments**. Note that functions don't need to accept arguments, in which case they are called without passing in anything (e.g. `print()` versus `print(my_string)`). *Aside: we can also create lists, sets, dictionaries, and tuples with functions: `list()`, `set()`, `dict()`, and `tuple()`.* Defining functions We use the `def` keyword to define functions. Let's create a function called `add()` with 2 parameters, `x` and `y`, which will be the names the code in the function will use to refer to the arguments we pass in when calling it:
###Code
def add(x, y):
"""This is a docstring. It is used to explain how the code works and is optional (but encouraged)."""
# this is a comment; it allows us to annotate the code
print('Performing addition')
return x + y
###Output
_____no_output_____
###Markdown
Once we run the code above, our function is ready to use:
###Code
type(add)
###Output
_____no_output_____
###Markdown
Let's add some numbers:
###Code
add(1, 2)
###Output
Performing addition
###Markdown
Return values We can store the result in a variable for later:
###Code
result = add(1, 2)
###Output
Performing addition
###Markdown
Notice the print statement wasn't captured in `result`. This variable will only have what the function **returns**. This is what the `return` line in the function definition did:
###Code
result
###Output
_____no_output_____
###Markdown
Note that functions don't have to return anything. Consider `print()`:
###Code
print_result = print('hello world')
###Output
hello world
###Markdown
If we take a look at what we got back, we see it is a `NoneType` object:
###Code
type(print_result)
###Output
_____no_output_____
###Markdown
In Python, the value `None` represents null values. We can check if our variable *is* `None`:
###Code
print_result is None
###Output
_____no_output_____
###Markdown
*Warning: use `is`/`is not` when checking against `None`, and comparison operators (e.g. >, >=, <, <=, ==, !=) when comparing to values other than `None`.* Function arguments *Note that function arguments can be anything, even other functions. We will see several examples of this in the text.* The function we defined requires arguments. If we don't provide them all, it will cause an error:
###Code
add(1)
###Output
_____no_output_____
###Markdown
We can use `help()` to check what arguments the function needs (notice the docstring ends up here):
###Code
help(add)
###Output
Help on function add in module __main__:
add(x, y)
This is a docstring. It is used to explain how the code works and is optional (but encouraged).
###Markdown
We will also get errors if we pass in data types that `add()` can't work with:
###Code
add(set(), set())
###Output
Performing addition
###Markdown
We will discuss error handling in the text. Control Flow Statements Sometimes we want to vary the path the code takes based on some criteria. For this we have `if`, `elif`, and `else`. We can use `if` on its own:
###Code
def make_positive(x):
"""Returns a positive x"""
if x < 0:
x *= -1
return x
###Output
_____no_output_____
###Markdown
Calling this function with negative input causes the code under the `if` statement to run:
###Code
make_positive(-1)
###Output
_____no_output_____
###Markdown
Calling this function with positive input skips the code under the `if` statement, keeping the number positive:
###Code
make_positive(2)
###Output
_____no_output_____
###Markdown
Sometimes we need an `else` statement as well:
###Code
def add_or_subtract(operation, x, y):
if operation == 'add':
return x + y
else:
return x - y
###Output
_____no_output_____
###Markdown
This triggers the code under the `if` statement:
###Code
add_or_subtract('add', 1, 2)
###Output
_____no_output_____
###Markdown
Since the Boolean check in the `if` statement was `False`, this triggers the code under the `else` statement:
###Code
add_or_subtract('subtract', 1, 2)
###Output
_____no_output_____
###Markdown
For more complicated logic, we can also use `elif`. We can have any number of `elif` statements. Optionally, we can include `else`.
###Code
def calculate(operation, x, y):
if operation == 'add':
return x + y
elif operation == 'subtract':
return x - y
elif operation == 'multiply':
return x * y
elif operation == 'division':
return x / y
else:
print("This case hasn't been handled")
###Output
_____no_output_____
###Markdown
The code keeps checking the conditions in the `if` statements from top to bottom until it finds `multiply`:
###Code
calculate('multiply', 3, 4)
###Output
_____no_output_____
###Markdown
The code keeps checking the conditions in the `if` statements from top to bottom until it hits the `else` statement:
###Code
calculate('power', 3, 4)
###Output
This case hasn't been handled
###Markdown
Loops `while` loops With `while` loops, we can keep running code until some stopping condition is met:
###Code
done = False
value = 2
while not done:
print('Still going...', value)
value *= 2
if value > 10:
done = True
###Output
Still going... 2
Still going... 4
Still going... 8
###Markdown
Note this can also be written as follows, by moving the stopping condition into the `while` statement itself:
###Code
value = 2
while value < 10:
print('Still going...', value)
value *= 2
###Output
Still going... 2
Still going... 4
Still going... 8
###Markdown
`for` loops With `for` loops, we can run our code *for each* element in a collection:
###Code
for i in range(5):
print(i)
###Output
0
1
2
3
4
###Markdown
We can use `for` loops with lists, tuples, sets, and dictionaries as well:
###Code
for element in my_list:
print(element)
for key, value in shopping_list.items():
print('For', key, 'we need to buy', value)
###Output
For veggies we need to buy ['spinach', 'kale', 'beets']
For fruits we need to buy bananas
For meat we need to buy 0
###Markdown
With `for` loops, we don't have to worry about checking if we have reached the stopping condition. Conversely, `while` loops can cause infinite loops if we don't remember to update variables. Imports We have been working with the portion of Python that is available without importing additional functionality. The Python standard library that comes with the install of Python is broken up into several **modules**, but we often only need a few. We can import whatever we need: a module in the standard library, a 3rd-party library, or code that we wrote. This is done with an `import` statement:
###Code
import math
print(math.pi)
###Output
3.141592653589793
###Markdown
If we only need a small piece from that module, we can do the following instead:
###Code
from math import pi
print(pi)
###Output
3.141592653589793
###Markdown
*Warning: anything you import is added to the namespace, so if you create a new variable/function/etc. with the same name it will overwrite the previous value. For this reason, we have to be careful with variable names, e.g. if you name something `sum`, you won't be able to add using the `sum()` built-in function anymore. Using notebooks or an IDE will help you avoid these issues with syntax highlighting.* Installing 3rd-party Packages **NOTE: We will cover the environment setup in the text; this is for reference.** We can use [`pip`](https://pip.pypa.io/en/stable/reference/) or [`conda`](https://docs.conda.io/projects/conda/en/latest/commands.html) to install packages, depending on how we created our virtual environment. The text walks through the commands to create virtual environments with `venv` and `conda`. The environment **MUST** be activated before installing the packages for this text; otherwise, it's possible they interfere with other projects on your machine or vice versa. To install a package, we can use `pip3 install <package_name>`. Optionally, we can provide a specific version to install: `pip3 install pandas==0.23.4`. Without that specification, we will get the most stable version. When we have many packages to install (as we do for this book), we will typically use a `requirements.txt` file: `pip3 install -r requirements.txt`. *Note: running `pip3 freeze > requirements.txt` will send the list of packages installed in the active environment and their respective versions to the `requirements.txt` file.* Classes *NOTE: We will discuss this further in the text in chapter 7. For now, it is important to be aware of the syntax in this section.* So far we have used Python as a functional programming language, but we also have the option to use it for **object-oriented programming**. You can think of a `class` as a way to group similar functionality together. Let's create a calculator class which can handle mathematical operations for us. For this, we use the `class` keyword and define **methods** for taking actions on the calculator. These methods are functions that take `self` as the first argument. When calling them, we don't pass in anything for that argument (example after this):
###Code
class Calculator:
"""This is the class docstring."""
def __init__(self):
"""This is a method and it is called when we create an object of type `Calculator`."""
self.on = False
def turn_on(self):
"""This method turns on the calculator."""
self.on = True
def add(self, x, y):
"""Perform addition if calculator is on"""
if self.on:
return x + y
else:
print('the calculator is not on')
###Output
_____no_output_____
###Markdown
In order to use the calculator, we need to **instantiate** an instance or object of type `Calculator`. Since the `__init__()` method has no parameters other than `self`, we don't need to provide anything:
###Code
my_calculator = Calculator()
###Output
_____no_output_____
###Markdown
Let's try to add some numbers:
###Code
my_calculator.add(1, 2)
###Output
the calculator is not on
###Markdown
Oops!! The calculator is not on. Let's turn it on:
###Code
my_calculator.turn_on()
###Output
_____no_output_____
###Markdown
Let's try again:
###Code
my_calculator.add(1, 2)
###Output
_____no_output_____
###Markdown
We can access **attributes** on object with dot notation. In this example, the only attribute is `on`, and it is set in the `__init__()` method:
###Code
my_calculator.on
###Output
_____no_output_____
###Markdown
Note that we can also update attributes:
###Code
my_calculator.on = False
my_calculator.add(1, 2)
###Output
the calculator is not on
###Markdown
Finally, we can use `help()` to get more information on the object:
###Code
help(my_calculator)
###Output
Help on Calculator in module __main__ object:
class Calculator(builtins.object)
| This is the class docstring.
|
| Methods defined here:
|
| __init__(self)
| This is a method and it is called when we create an object of type `Calculator`.
|
| add(self, x, y)
| Perform addition if calculator is on
|
| turn_on(self)
| This method turns on the calculator.
|
| ----------------------------------------------------------------------
| Data descriptors defined here:
|
| __dict__
| dictionary for instance variables (if defined)
|
| __weakref__
| list of weak references to the object (if defined)
###Markdown
... and also for a method:
###Code
help(my_calculator.add)
###Output
Help on method add in module __main__:
add(x, y) method of __main__.Calculator instance
Perform addition if calculator is on
|
05-CNN-For-Image-Classification/01-cnn.ipynb | ###Markdown
5.1 Classifying Fashion Items with a CNN We will use a Convolutional Neural Network (CNN) to improve the performance of classifying fashion items.
###Code
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
from torchvision import transforms, datasets
torch.manual_seed(42)
USE_CUDA = torch.cuda.is_available()
DEVICE = torch.device("cuda" if USE_CUDA else "cpu")
EPOCHS = 40
BATCH_SIZE = 64
###Output
_____no_output_____
###Markdown
Loading the Dataset
###Code
train_loader = torch.utils.data.DataLoader(
datasets.MNIST('./.data',
train=True,
download=True,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])),
batch_size=BATCH_SIZE, shuffle=True)
test_loader = torch.utils.data.DataLoader(
datasets.MNIST('./.data',
train=False,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])),
batch_size=BATCH_SIZE, shuffle=True)
###Output
_____no_output_____
###Markdown
Training Fashion MNIST with a Neural Network (note: the code below actually loads the standard MNIST digit dataset via `datasets.MNIST`)
###Code
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
self.conv2_drop = nn.Dropout2d()
self.fc1 = nn.Linear(320, 50)
self.fc2 = nn.Linear(50, 10)
def forward(self, x):
x = F.relu(F.max_pool2d(self.conv1(x), 2))
x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
x = x.view(-1, 320)
x = F.relu(self.fc1(x))
x = F.dropout(x, training=self.training)
x = self.fc2(x)
return F.log_softmax(x, dim=1)
###Output
_____no_output_____
###Markdown
Hyperparameters The `to()` function sends the model's parameters to the specified device. It is generally unnecessary when using a single CPU, but to use a GPU you must send the model there with `to("cuda")`; if you don't, the model stays on the CPU and you cannot benefit from faster training. For the optimization algorithm we will use `optim.SGD`, which is built into PyTorch.
###Code
model = Net().to(DEVICE)
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.5)
###Output
_____no_output_____
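###Markdown
A quick sanity check that the parameters actually moved to the intended device; this is a small sketch that uses only the objects defined above:
###Code
# the device of any parameter tensor tells us where the model lives
print(next(model.parameters()).device)
###Output
_____no_output_____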
###Markdown
Training
###Code
def train(model, train_loader, optimizer, epoch):
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
data, target = data.to(DEVICE), target.to(DEVICE)
optimizer.zero_grad()
output = model(data)
loss = F.cross_entropy(output, target)
loss.backward()
optimizer.step()
if batch_idx % 200 == 0:
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
epoch, batch_idx * len(data), len(train_loader.dataset),
100. * batch_idx / len(train_loader), loss.item()))
###Output
_____no_output_____
###Markdown
Testing No matter how well a model trains, it is useless if it performs poorly on real data. What we really want is not a model optimized for the training data but one that performs well on all data. Fitting all the data that exists in the world is called "generalization", and the numeric measure of how well a model adapts to real data is called the "generalization error". To see how well our model generalizes, and to know when to stop training, we will measure its performance on the test set at the end of every epoch.
###Code
def test(model, test_loader):
model.eval()
test_loss = 0
correct = 0
with torch.no_grad():
for data, target in test_loader:
data, target = data.to(DEVICE), target.to(DEVICE)
output = model(data)
# sum up batch loss
                test_loss += F.cross_entropy(output, target,
                                             reduction='sum').item()  # 'sum' replaces the deprecated size_average=False
# get the index of the max log-probability
pred = output.max(1, keepdim=True)[1]
correct += pred.eq(target.view_as(pred)).sum().item()
test_loss /= len(test_loader.dataset)
test_accuracy = 100. * correct / len(test_loader.dataset)
return test_loss, test_accuracy
###Output
_____no_output_____
###Markdown
Running the Code Everything is ready now. Let's run the code and check that training actually works!
###Code
for epoch in range(1, EPOCHS + 1):
train(model, train_loader, optimizer, epoch)
test_loss, test_accuracy = test(model, test_loader)
print('[{}] Test Loss: {:.4f}, Accuracy: {:.2f}%'.format(
epoch, test_loss, test_accuracy))
###Output
Train Epoch: 1 [0/60000 (0%)] Loss: 2.329612
Train Epoch: 1 [12800/60000 (21%)] Loss: 1.359355
Train Epoch: 1 [25600/60000 (43%)] Loss: 0.841400
Train Epoch: 1 [38400/60000 (64%)] Loss: 0.719382
Train Epoch: 1 [51200/60000 (85%)] Loss: 0.519407
[1] Test Loss: 0.2113, Accuracy: 94.05%
Train Epoch: 2 [0/60000 (0%)] Loss: 0.382886
Train Epoch: 2 [12800/60000 (21%)] Loss: 0.603396
Train Epoch: 2 [25600/60000 (43%)] Loss: 0.494697
Train Epoch: 2 [38400/60000 (64%)] Loss: 0.591269
Train Epoch: 2 [51200/60000 (85%)] Loss: 0.423311
[2] Test Loss: 0.1283, Accuracy: 96.10%
Train Epoch: 3 [0/60000 (0%)] Loss: 0.302804
Train Epoch: 3 [12800/60000 (21%)] Loss: 0.344925
Train Epoch: 3 [25600/60000 (43%)] Loss: 0.209379
Train Epoch: 3 [38400/60000 (64%)] Loss: 0.232554
Train Epoch: 3 [51200/60000 (85%)] Loss: 0.250169
[3] Test Loss: 0.1063, Accuracy: 96.57%
Train Epoch: 4 [0/60000 (0%)] Loss: 0.446147
Train Epoch: 4 [12800/60000 (21%)] Loss: 0.154575
Train Epoch: 4 [25600/60000 (43%)] Loss: 0.106207
Train Epoch: 4 [38400/60000 (64%)] Loss: 0.223932
Train Epoch: 4 [51200/60000 (85%)] Loss: 0.293354
[4] Test Loss: 0.0817, Accuracy: 97.54%
Train Epoch: 5 [0/60000 (0%)] Loss: 0.201078
Train Epoch: 5 [12800/60000 (21%)] Loss: 0.226600
Train Epoch: 5 [25600/60000 (43%)] Loss: 0.239684
Train Epoch: 5 [38400/60000 (64%)] Loss: 0.319189
Train Epoch: 5 [51200/60000 (85%)] Loss: 0.133681
[5] Test Loss: 0.0797, Accuracy: 97.59%
Train Epoch: 6 [0/60000 (0%)] Loss: 0.088906
Train Epoch: 6 [12800/60000 (21%)] Loss: 0.144533
Train Epoch: 6 [25600/60000 (43%)] Loss: 0.105790
Train Epoch: 6 [38400/60000 (64%)] Loss: 0.222070
Train Epoch: 6 [51200/60000 (85%)] Loss: 0.355789
[6] Test Loss: 0.0717, Accuracy: 97.78%
Train Epoch: 7 [0/60000 (0%)] Loss: 0.110005
Train Epoch: 7 [12800/60000 (21%)] Loss: 0.216790
Train Epoch: 7 [25600/60000 (43%)] Loss: 0.118360
Train Epoch: 7 [38400/60000 (64%)] Loss: 0.189689
Train Epoch: 7 [51200/60000 (85%)] Loss: 0.174704
[7] Test Loss: 0.0651, Accuracy: 98.05%
Train Epoch: 8 [0/60000 (0%)] Loss: 0.312637
Train Epoch: 8 [12800/60000 (21%)] Loss: 0.253586
Train Epoch: 8 [25600/60000 (43%)] Loss: 0.240785
Train Epoch: 8 [38400/60000 (64%)] Loss: 0.052346
Train Epoch: 8 [51200/60000 (85%)] Loss: 0.147398
[8] Test Loss: 0.0610, Accuracy: 98.06%
Train Epoch: 9 [0/60000 (0%)] Loss: 0.150888
Train Epoch: 9 [12800/60000 (21%)] Loss: 0.084304
Train Epoch: 9 [25600/60000 (43%)] Loss: 0.114859
Train Epoch: 9 [38400/60000 (64%)] Loss: 0.168405
Train Epoch: 9 [51200/60000 (85%)] Loss: 0.099915
[9] Test Loss: 0.0577, Accuracy: 98.21%
Train Epoch: 10 [0/60000 (0%)] Loss: 0.196729
Train Epoch: 10 [12800/60000 (21%)] Loss: 0.232811
Train Epoch: 10 [25600/60000 (43%)] Loss: 0.198227
Train Epoch: 10 [38400/60000 (64%)] Loss: 0.199371
Train Epoch: 10 [51200/60000 (85%)] Loss: 0.146785
[10] Test Loss: 0.0553, Accuracy: 98.23%
Train Epoch: 11 [0/60000 (0%)] Loss: 0.083517
Train Epoch: 11 [12800/60000 (21%)] Loss: 0.081001
Train Epoch: 11 [25600/60000 (43%)] Loss: 0.108867
Train Epoch: 11 [38400/60000 (64%)] Loss: 0.116991
Train Epoch: 11 [51200/60000 (85%)] Loss: 0.092169
[11] Test Loss: 0.0560, Accuracy: 98.22%
Train Epoch: 12 [0/60000 (0%)] Loss: 0.300310
Train Epoch: 12 [12800/60000 (21%)] Loss: 0.079625
Train Epoch: 12 [25600/60000 (43%)] Loss: 0.048116
Train Epoch: 12 [38400/60000 (64%)] Loss: 0.186680
Train Epoch: 12 [51200/60000 (85%)] Loss: 0.138391
[12] Test Loss: 0.0544, Accuracy: 98.20%
Train Epoch: 13 [0/60000 (0%)] Loss: 0.204488
Train Epoch: 13 [12800/60000 (21%)] Loss: 0.051974
Train Epoch: 13 [25600/60000 (43%)] Loss: 0.269047
Train Epoch: 13 [38400/60000 (64%)] Loss: 0.082210
Train Epoch: 13 [51200/60000 (85%)] Loss: 0.150969
[13] Test Loss: 0.0493, Accuracy: 98.36%
Train Epoch: 14 [0/60000 (0%)] Loss: 0.247445
Train Epoch: 14 [12800/60000 (21%)] Loss: 0.219427
Train Epoch: 14 [25600/60000 (43%)] Loss: 0.229339
Train Epoch: 14 [38400/60000 (64%)] Loss: 0.207385
Train Epoch: 14 [51200/60000 (85%)] Loss: 0.076939
[14] Test Loss: 0.0516, Accuracy: 98.42%
Train Epoch: 15 [0/60000 (0%)] Loss: 0.103342
Train Epoch: 15 [12800/60000 (21%)] Loss: 0.035192
Train Epoch: 15 [25600/60000 (43%)] Loss: 0.364668
Train Epoch: 15 [38400/60000 (64%)] Loss: 0.202257
Train Epoch: 15 [51200/60000 (85%)] Loss: 0.089045
[15] Test Loss: 0.0461, Accuracy: 98.51%
Train Epoch: 16 [0/60000 (0%)] Loss: 0.220236
Train Epoch: 16 [12800/60000 (21%)] Loss: 0.148072
Train Epoch: 16 [25600/60000 (43%)] Loss: 0.173183
Train Epoch: 16 [38400/60000 (64%)] Loss: 0.116768
Train Epoch: 16 [51200/60000 (85%)] Loss: 0.215081
[16] Test Loss: 0.0452, Accuracy: 98.62%
Train Epoch: 17 [0/60000 (0%)] Loss: 0.226692
Train Epoch: 17 [12800/60000 (21%)] Loss: 0.244543
Train Epoch: 17 [25600/60000 (43%)] Loss: 0.056121
Train Epoch: 17 [38400/60000 (64%)] Loss: 0.149407
Train Epoch: 17 [51200/60000 (85%)] Loss: 0.056285
[17] Test Loss: 0.0469, Accuracy: 98.56%
Train Epoch: 18 [0/60000 (0%)] Loss: 0.165099
Train Epoch: 18 [12800/60000 (21%)] Loss: 0.070854
Train Epoch: 18 [25600/60000 (43%)] Loss: 0.117704
Train Epoch: 18 [38400/60000 (64%)] Loss: 0.041065
Train Epoch: 18 [51200/60000 (85%)] Loss: 0.183963
[18] Test Loss: 0.0457, Accuracy: 98.58%
Train Epoch: 19 [0/60000 (0%)] Loss: 0.208670
Train Epoch: 19 [12800/60000 (21%)] Loss: 0.084577
Train Epoch: 19 [25600/60000 (43%)] Loss: 0.089816
Train Epoch: 19 [38400/60000 (64%)] Loss: 0.159399
Train Epoch: 19 [51200/60000 (85%)] Loss: 0.229835
[19] Test Loss: 0.0425, Accuracy: 98.63%
Train Epoch: 20 [0/60000 (0%)] Loss: 0.176050
Train Epoch: 20 [12800/60000 (21%)] Loss: 0.131442
Train Epoch: 20 [25600/60000 (43%)] Loss: 0.233454
Train Epoch: 20 [38400/60000 (64%)] Loss: 0.117495
Train Epoch: 20 [51200/60000 (85%)] Loss: 0.177741
[20] Test Loss: 0.0419, Accuracy: 98.71%
Train Epoch: 21 [0/60000 (0%)] Loss: 0.068999
Train Epoch: 21 [12800/60000 (21%)] Loss: 0.113593
Train Epoch: 21 [25600/60000 (43%)] Loss: 0.047926
Train Epoch: 21 [38400/60000 (64%)] Loss: 0.106345
Train Epoch: 21 [51200/60000 (85%)] Loss: 0.053019
[21] Test Loss: 0.0413, Accuracy: 98.70%
Train Epoch: 22 [0/60000 (0%)] Loss: 0.286009
Train Epoch: 22 [12800/60000 (21%)] Loss: 0.216453
Train Epoch: 22 [25600/60000 (43%)] Loss: 0.027883
Train Epoch: 22 [38400/60000 (64%)] Loss: 0.091296
Train Epoch: 22 [51200/60000 (85%)] Loss: 0.102782
[22] Test Loss: 0.0434, Accuracy: 98.68%
Train Epoch: 23 [0/60000 (0%)] Loss: 0.100812
Train Epoch: 23 [12800/60000 (21%)] Loss: 0.074122
Train Epoch: 23 [25600/60000 (43%)] Loss: 0.099160
Train Epoch: 23 [38400/60000 (64%)] Loss: 0.266184
Train Epoch: 23 [51200/60000 (85%)] Loss: 0.069112
[23] Test Loss: 0.0404, Accuracy: 98.72%
Train Epoch: 24 [0/60000 (0%)] Loss: 0.119579
Train Epoch: 24 [12800/60000 (21%)] Loss: 0.197283
Train Epoch: 24 [25600/60000 (43%)] Loss: 0.060932
Train Epoch: 24 [38400/60000 (64%)] Loss: 0.135960
Train Epoch: 24 [51200/60000 (85%)] Loss: 0.116418
[24] Test Loss: 0.0391, Accuracy: 98.80%
Train Epoch: 25 [0/60000 (0%)] Loss: 0.076208
Train Epoch: 25 [12800/60000 (21%)] Loss: 0.186498
Train Epoch: 25 [25600/60000 (43%)] Loss: 0.124093
Train Epoch: 25 [38400/60000 (64%)] Loss: 0.033837
Train Epoch: 25 [51200/60000 (85%)] Loss: 0.085963
[25] Test Loss: 0.0400, Accuracy: 98.79%
Train Epoch: 26 [0/60000 (0%)] Loss: 0.156954
Train Epoch: 26 [12800/60000 (21%)] Loss: 0.165709
Train Epoch: 26 [25600/60000 (43%)] Loss: 0.084465
Train Epoch: 26 [38400/60000 (64%)] Loss: 0.202391
Train Epoch: 26 [51200/60000 (85%)] Loss: 0.095991
[26] Test Loss: 0.0397, Accuracy: 98.76%
Train Epoch: 27 [0/60000 (0%)] Loss: 0.180729
Train Epoch: 27 [12800/60000 (21%)] Loss: 0.119199
Train Epoch: 27 [25600/60000 (43%)] Loss: 0.105509
Train Epoch: 27 [38400/60000 (64%)] Loss: 0.066738
Train Epoch: 27 [51200/60000 (85%)] Loss: 0.174386
[27] Test Loss: 0.0382, Accuracy: 98.86%
Train Epoch: 28 [0/60000 (0%)] Loss: 0.179120
Train Epoch: 28 [12800/60000 (21%)] Loss: 0.115330
Train Epoch: 28 [25600/60000 (43%)] Loss: 0.094009
Train Epoch: 28 [38400/60000 (64%)] Loss: 0.099955
Train Epoch: 28 [51200/60000 (85%)] Loss: 0.162169
[28] Test Loss: 0.0396, Accuracy: 98.78%
Train Epoch: 29 [0/60000 (0%)] Loss: 0.096138
Train Epoch: 29 [12800/60000 (21%)] Loss: 0.200778
Train Epoch: 29 [25600/60000 (43%)] Loss: 0.184474
|
docs/notebooks/input_matrices_and_tensors.ipynb | ###Markdown
Input to SMURFF
In this notebook we will look at how to provide input to SMURFF with dense and sparse matrices. SMURFF accepts the following matrix files for train, test and side-info data:
* for dense matrix or tensor input: [numpy.ndarrays](https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.ndarray.html)
* for sparse matrix input: [scipy sparse matrices](https://docs.scipy.org/doc/scipy/reference/sparse.html) in COO, CSR or CSC format
* for sparse tensors: a wrapper around a [pandas.DataFrame](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html)
Let's have a look at how this could work. Dense Train Input
###Code
# dense input (these imports are needed by the cells in this notebook)
import numpy as np
import scipy.sparse as sp
import smurff

Ydense = np.random.rand(10, 20)
session = smurff.TrainSession(burnin = 5, nsamples = 5)
session.addTrainAndTest(Ydense)
session.run()
###Output
_____no_output_____
###Markdown
Sparse Matrix Input
The so-called *zero* elements in sparse matrices can either represent
1. missing values, also called 'unknown' or 'not-available' (NA) values.
2. actual zero values, to optimize the space that stores the matrix.
**Important**:
* when calling `addTrainAndTest(Ytrain, Ytest, is_scarce)` the `is_scarce` refers to the `Ytrain` matrix. `Ytest` is *always* scarce.
* when calling `addSideInfo(mode, sideinfoMatrix)` with a sparse `sideinfoMatrix`, this matrix is always fully known.
###Code
# sparse matrix input with 20% zeros (fully known)
Ysparse = sp.rand(15, 10, 0.2)
session = smurff.TrainSession(burnin = 5, nsamples = 5)
session.addTrainAndTest(Ysparse, is_scarce = False)
session.run()
# sparse matrix input with unknowns (the default)
Yscarce = sp.rand(15, 10, 0.2)
session = smurff.TrainSession(burnin = 5, nsamples = 5)
session.addTrainAndTest(Yscarce, is_scarce = True)
session.run()
###Output
_____no_output_____
###Markdown
Tensor input
SMURFF also supports tensor factorization with and without side information on any of the modes. A tensor can be thought of as a generalization of a matrix to relations with more than two items. For example, a 3-tensor of `drug x cell x gene` could express the effect of a drug on the given cell and gene. In this case the prediction for the element `Yhat[i,j,k]` is given by
$$ \hat{Y}_{ijk} = \sum_{d=1}^{D}u^{(1)}_{d,i}u^{(2)}_{d,j}u^{(3)}_{d,k} + \text{mean} $$
Visually, the model predicts `Yhat[i,j,k]` by multiplying all latent vectors together element-wise and then taking the sum along the latent dimension (the accompanying figure is omitted in this export; it also leaves out the global mean).
For tensors SMURFF implements a `SparseTensor` class. `SparseTensor` is a wrapper around a pandas `DataFrame` where each row stores the coordinate and the value of a known cell in the tensor. Specifically, the integer columns in the DataFrame give the coordinate of the cell, and the `float` (or double) column stores the value in the cell (the order of the columns does not matter). The coordinates are 0-based. The shape of the `SparseTensor` can be provided; otherwise it is inferred from the maximum index in each mode.
Here is a simple toy example of factorizing a 3-tensor (the original text mentions side information on the first mode, but the code below does not add any).
###Code
import numpy as np
import pandas as pd
import scipy.sparse
import smurff
import itertools
## generating toy data
A = np.random.randn(15, 2)
B = np.random.randn(3, 2)
C = np.random.randn(2, 2)
idx = list( itertools.product(np.arange(A.shape[0]),
np.arange(B.shape[0]),
np.arange(C.shape[0])) )
df = pd.DataFrame( np.asarray(idx), columns=["A", "B", "C"])
df["value"] = np.array([ np.sum(A[i[0], :] * B[i[1], :] * C[i[2], :]) for i in idx ])
## assigning 20% of the cells to test set
Ytrain, Ytest = smurff.make_train_test_df(df, 0.2)
print("Ytrain = ", Ytrain)
## for artificial dataset using small values for burnin, nsamples and num_latents is fine
predictions = smurff.BPMFSession(
Ytrain=Ytrain,
Ytest=Ytest,
num_latent=4,
burnin=20,
nsamples=20).run()
print("First prediction of Ytest tensor: ", predictions[0])
###Output
_____no_output_____
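###Markdown
As a minimal sketch of the explicit-shape option described above: it assumes, as the text states, that `smurff.SparseTensor` wraps a coordinate `DataFrame` and that the full shape can be passed in. The exact keyword name is an assumption here, so check `help(smurff.SparseTensor)` before relying on it.
###Code
# hypothetical illustration: give the tensor its full 15 x 3 x 2 shape explicitly
# instead of letting it be inferred from the maximum index in each mode
Yfull = smurff.SparseTensor(df, shape=(15, 3, 2))
###Output
_____no_output_____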
###Markdown
Input to SMURFF
In this notebook we will look at how to provide input to SMURFF with dense and sparse matrices. SMURFF accepts the following matrix files for train, test and side-info data:
* for dense matrix or tensor input: [numpy.ndarrays](https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.ndarray.html)
* for sparse matrix input: [scipy sparse matrices](https://docs.scipy.org/doc/scipy/reference/sparse.html) in COO, CSR or CSC format
* for sparse tensors: a wrapper around a [pandas.DataFrame](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html)
Let's have a look at how this could work. Dense Train Input
###Code
# dense input (these imports are needed by the cells in this notebook)
import numpy as np
import scipy.sparse as sp
import smurff

Ydense = np.random.rand(10, 20)
trainSession = smurff.TrainSession(burnin = 5, nsamples = 5)
trainSession.addTrainAndTest(Ydense)
trainSession.run()
###Output
_____no_output_____
###Markdown
Sparse Matrix Input
The so-called *zero* elements in sparse matrices can either represent
1. missing values, also called 'unknown' or 'not-available' (NA) values.
2. actual zero values, to optimize the space that stores the matrix.
**Important**:
* when calling `addTrainAndTest(Ytrain, Ytest, is_scarce)` the `is_scarce` refers to the `Ytrain` matrix. `Ytest` is *always* scarce.
* when calling `addSideInfo(mode, sideinfoMatrix)` with a sparse `sideinfoMatrix`, this matrix is always fully known.
###Code
# sparse matrix input with 20% zeros (fully known)
Ysparse = sp.rand(15, 10, 0.2)
trainSession = smurff.TrainSession(burnin = 5, nsamples = 5)
trainSession.addTrainAndTest(Ysparse, is_scarce = False)
trainSession.run()
# sparse matrix input with unknowns (the default)
Yscarce = sp.rand(15, 10, 0.2)
trainSession = smurff.TrainSession(burnin = 5, nsamples = 5)
trainSession.addTrainAndTest(Yscarce, is_scarce = True)
trainSession.run()
###Output
_____no_output_____
###Markdown
Tensor input
SMURFF also supports tensor factorization with and without side information on any of the modes. A tensor can be thought of as a generalization of a matrix to relations with more than two items. For example, a 3-tensor of `drug x cell x gene` could express the effect of a drug on the given cell and gene. In this case the prediction for the element `Yhat[i,j,k]` is given by
$$ \hat{Y}_{ijk} = \sum_{d=1}^{D}u^{(1)}_{d,i}u^{(2)}_{d,j}u^{(3)}_{d,k} + \text{mean} $$
Visually, the model predicts `Yhat[i,j,k]` by multiplying all latent vectors together element-wise and then taking the sum along the latent dimension (the accompanying figure is omitted in this export; it also leaves out the global mean).
For tensors SMURFF implements a `SparseTensor` class. `SparseTensor` can be constructed from a pandas `DataFrame` where each row stores the coordinate and the value of a known cell in the tensor. Specifically, the integer columns in the DataFrame give the coordinate of the cell, and the `float` (or double) column stores the value in the cell (the order of the columns does not matter). The coordinates are 0-based. The shape of the `SparseTensor` can be provided; otherwise it is inferred from the maximum index in each mode.
Here is a simple toy example of factorizing a 3-tensor (the original text mentions side information on the first mode, but the code below does not add any).
###Code
import numpy as np
import pandas as pd
import scipy.sparse
import smurff
import itertools
## generating toy data
A = np.random.randn(15, 2)
B = np.random.randn(3, 2)
C = np.random.randn(2, 2)
idx = list( itertools.product(np.arange(A.shape[0]),
np.arange(B.shape[0]),
np.arange(C.shape[0])) )
df = pd.DataFrame( np.asarray(idx), columns=["A", "B", "C"])
df["value"] = np.array([ np.sum(A[i[0], :] * B[i[1], :] * C[i[2], :]) for i in idx ])
## assigning 20% of the cells to test set
Ytrain, Ytest = smurff.make_train_test(df, 0.2)
print("Ytrain = ", Ytrain)
## for artificial dataset using small values for burnin, nsamples and num_latents is fine
predictions = smurff.BPMFSession(
Ytrain=Ytrain,
Ytest=Ytest,
num_latent=4,
burnin=20,
nsamples=20).run()
print("First prediction of Ytest tensor: ", predictions[0])
###Output
_____no_output_____ |
report/generate_figures/Fine_tuning.ipynb | ###Markdown
Table of Contents
###Code
%load_ext autoreload
%autoreload 2
%matplotlib inline
%cd ../..
import pickle
from notebooks import utils
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
sns.set()
from notebooks import utils
sns.set_style("whitegrid")
from collections import OrderedDict
import numpy as np
exp_dir = "experiments/retrain/"
outfp = "report/figures/fine_tuning.png"
#results = utils.extract_res_from_files(exp_dir)
TRAIN1 = [("L1", 50)]
TRAIN2 = [("L2", 150), ("L1", 50)]
#names
models = OrderedDict([
("tucodec_relu_vanilla", {"loc": 'experiments/06a5/12', "sched": TRAIN1}),
("tucodec_prelu_next", {"loc": 'experiments/DA3/06a/1/', "sched": TRAIN1}),
("RDB3_27_4", {"loc": 'experiments/09a/09a/0', "sched": TRAIN2}),
("ResNeXt_27_1", {"loc": 'experiments/09a/09a/2', "sched": TRAIN2}),
("RAB_4_next", {"loc": 'experiments/03c/10/', "sched": TRAIN2}),
("GDRN_CBAM", {"loc": 'experiments/09c/0', "sched": TRAIN2})])
keys= ["mse_DA", "time", "l1_loss", "l2_loss"]
epochs1 = list(range(150, 360, 10)) + [299, 349]
#print(epochs1)
results= utils.extract_res_from_files2(exp_dir, epochs1, keys)
name_dict = {'tucodec_relu_vanilla': "Tucodec-vanilla",
'tucodec_prelu_next': "Tucodec-NeXt",
'RDB3_27_4': "RDB3-27-4-vanilla+CBAM",
'ResNeXt_27_1': "ResNeXt3-27-1-vanilla+CBAM",
'RAB_4_next': "RAB-4-NeXt",
'GDRN_CBAM': "GRDN-NeXt+CBAM"
}
dfs = []
ignore = ["tucodec_relu_vanilla", "tucodec_prelu_next", "RDB3_27_4"] #"RAB_4_next"]
for result in results:
#model_data = result["model_data"]
model_name = result["path"].split("/")[-1]
if model_name in ignore:
continue
label = name_dict[model_name]
df = result["df"].copy()
df["Model"] = label
dfs.append(df)
print(df.shape)
DF = pd.concat(dfs, ignore_index=True)
LARGE = 0.3
REPLACE = 0.25
#update large values with manageably large values
DF["mse_DA"] = DF["mse_DA"].apply(lambda x: x if x < LARGE else REPLACE)
DF["Epoch"] = DF["epoch"]
DF["MSE DA"] = DF["mse_DA"]
print(DF.mse_DA.mean())
DF[DF ["mse_DA"] >= LARGE]
ALPHA_TRAIN = 0.25
ALPHA_TEST = 0.25
ax = sns.lineplot(x="Epoch", y="MSE DA",
hue="Model", style="Subset",
data=DF, palette=["r", "b", 'g', ]) #"y", ])
ax.set_ylim(( 0.1, 0.22))
ax.set_xlim((150, 350))
#add dotted lines for tucodec values
y = np.linspace(0.0,1,10)
x = 0 * y + 300
ax.plot(x, y, '-', color="darkslategrey")
fig = plt.gcf()
fig.set_size_inches(15, 5.5)
fig.savefig(outfp)
###Output
_____no_output_____
###Markdown
A disabled variant of the plot above, kept for reference: `ax = sns.lineplot(x="Epoch", y="l2_loss", hue="Model", style="Subset", data=DF, palette=["r", "b", "g"])`; `ax.set_ylim((800, 4000))`; `fig = plt.gcf()`; `fig.set_size_inches(15, 7)`
###Code
fig, axs = plt.subplots(1, 2, sharey=False)
metrics = ["mse_DA", "l2_loss"]
colors = ["r", "b", 'g', "y", ]
ylim1 = (0.05, 0.4)
names = list(models.keys()) #ignore tucodec
print(names)
for result in results:
test_df = result["test_df"]
train_df = result["train_df"]
settings = result["settings"]
axs[0]
axs[0].set_ylabel('DA MSE', )
axs[0].set_xlabel('Epoch', )
axs[0].plot(train_df.epoch, 'mse_DA', data=train_df, marker='+', color="g", )
    axs[0].plot(test_df.epoch, 'mse_DA', data=test_df, marker='x', color="r")
axs[0].tick_params(axis='y',)
axs[0].set_ylim(ylim1)
fig.set_size_inches(15, 7)
# ax = plt.plot(test_df.epoch, test_df[metric], 'ro-')
# plt.plot(train_df.epoch, train_df[metric], 'g+-')
# plt.grid(True, axis='y', )
# ax[0].grid(True, axis='x', )
#
# caption: Note that unlike in \ref{fig:augmentation}, which gives the L2 reconstruction error,
###Output
_____no_output_____ |
data structure/array and linked list/Linked List Practice.ipynb | ###Markdown
Linked List Practice
Implement a linked list class. Your class should be able to:
+ Append data to the tail of the list and prepend to the head
+ Search the linked list for a value and return the node
+ Remove a node
+ Pop, which means to return the first node's value and delete the node from the list
+ Insert data at some position in the list
+ Return the size (length) of the linked list
###Code
class Node:
def __init__(self, value):
self.value = value
self.next = None
class LinkedList:
def __init__(self):
self.head = None
    def prepend(self, value):
        """ Prepend a value to the beginning of the list. """
        node = Node(value)
        node.next = self.head
        self.head = node
    def append(self, value):
        """ Append a value to the end of the list. """
        if self.head is None:
            self.head = Node(value)
            return
        node = self.head
        while node.next:
            node = node.next
        node.next = Node(value)
    def search(self, value):
        """ Search the linked list for a node with the requested value and return the node. """
        node = self.head
        while node:
            if node.value == value:
                return node
            node = node.next
        return None
    def remove(self, value):
        """ Remove first occurrence of value. """
        if self.head is None:
            return
        if self.head.value == value:
            self.head = self.head.next
            return
        node = self.head
        while node.next:
            if node.next.value == value:
                node.next = node.next.next
                return
            node = node.next
    def pop(self):
        """ Return the first node's value and remove it from the list. """
        if self.head is None:
            return None
        value = self.head.value
        self.head = self.head.next
        return value
    def insert(self, value, pos):
        """ Insert value at pos position in the list. If pos is larger than the
            length of the list, append to the end of the list. """
        if pos == 0 or self.head is None:
            self.prepend(value)
            return
        index = 0
        node = self.head
        while node.next and index < pos - 1:
            node = node.next
            index += 1
        new_node = Node(value)
        new_node.next = node.next
        node.next = new_node
    def size(self):
        """ Return the size or length of the linked list. """
        count = 0
        node = self.head
        while node:
            count += 1
            node = node.next
        return count
def to_list(self):
out = []
node = self.head
while node:
out.append(node.value)
node = node.next
return out
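# Quick usage sketch (the assert-based tests below are the thorough checks):
#   ll = LinkedList()
#   ll.append(1); ll.prepend(0); ll.insert(2, 2)
#   ll.to_list()  # -> [0, 1, 2]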
## Test your implementation here
# Test prepend
linked_list = LinkedList()
linked_list.prepend(1)
assert linked_list.to_list() == [1], f"list contents: {linked_list.to_list()}"
linked_list.append(3)
linked_list.prepend(2)
assert linked_list.to_list() == [2, 1, 3], f"list contents: {linked_list.to_list()}"
# Test append
linked_list = LinkedList()
linked_list.append(1)
assert linked_list.to_list() == [1], f"list contents: {linked_list.to_list()}"
linked_list.append(3)
assert linked_list.to_list() == [1, 3], f"list contents: {linked_list.to_list()}"
# Test search
linked_list.prepend(2)
linked_list.prepend(1)
linked_list.append(4)
linked_list.append(3)
assert linked_list.search(1).value == 1, f"list contents: {linked_list.to_list()}"
assert linked_list.search(4).value == 4, f"list contents: {linked_list.to_list()}"
# Test remove
linked_list.remove(1)
assert linked_list.to_list() == [2, 1, 3, 4, 3], f"list contents: {linked_list.to_list()}"
linked_list.remove(3)
assert linked_list.to_list() == [2, 1, 4, 3], f"list contents: {linked_list.to_list()}"
linked_list.remove(3)
assert linked_list.to_list() == [2, 1, 4], f"list contents: {linked_list.to_list()}"
# Test pop
value = linked_list.pop()
assert value == 2, f"list contents: {linked_list.to_list()}"
assert linked_list.head.value == 1, f"list contents: {linked_list.to_list()}"
# Test insert
linked_list.insert(5, 0)
assert linked_list.to_list() == [5, 1, 4], f"list contents: {linked_list.to_list()}"
linked_list.insert(2, 1)
assert linked_list.to_list() == [5, 2, 1, 4], f"list contents: {linked_list.to_list()}"
linked_list.insert(3, 6)
assert linked_list.to_list() == [5, 2, 1, 4, 3], f"list contents: {linked_list.to_list()}"
# Test size
assert linked_list.size() == 5, f"list contents: {linked_list.to_list()}"
###Output
_____no_output_____ |
02_dog_breed_classifier/dog_app-cn01.ipynb | ###Markdown
Convolutional Neural Networks
Project: Write an Algorithm for a Dog Identification App
---
In this notebook, some template code has already been provided for you; to complete this project successfully you will need to implement additional functionality. Beyond that, the provided code does not need to be modified. Sections whose headings begin with **(IMPLEMENTATION)** indicate that you must provide additional functionality in the code block below. Instructions are given in each section, and implementation details are provided in code blocks beginning with "TODO". Please read the instructions carefully.
> **Note**: Once you have completed all of the code implementations, you will need to export this iPython Notebook as an HTML document. Before exporting the notebook to HTML, run all of the code cells so that reviewers can see the final implementation and output, then export the notebook via the menu at the top: **File -> Download as -> HTML (.html)**. Your submission should include both this notebook and the exported document.
In addition to implementing code, you will need to answer questions related to the project and your implementation. Read each question carefully and fill in your answer in the text box below **Answer:**. Your submission will be evaluated based on your answers to each question as well as the implementation code.
>**Note:** Code and markdown cells can be executed with the **Shift + Enter** keyboard shortcut, and you can edit a markdown cell by double-clicking it to enter edit mode.
The rubric also contains optional "stand out" suggestions that can guide you to improve the project beyond the minimum requirements; if you decide to follow these suggestions, add the code to this Jupyter notebook.
---
Why We're Here
In this notebook, you will develop an algorithm that could be used in a mobile or web app. In the end, your code will accept any user-supplied image as input. If a dog is detected in the image, the algorithm will provide a rough estimate of the dog's breed. If a human face is detected, it will provide a rough estimate of the most similar dog breed. The figure below shows potential sample output of the finished project (but we expect every student's algorithm to behave differently!).
In this real-world application, you will piece together a series of models that perform different tasks; for example, the algorithm that detects human faces in an image will differ from the CNN that infers dog breed. There are many points where things can go wrong, and no algorithm is perfect. Even an imperfect answer can still create a fun user experience.
Project Plan
We have split this notebook into several independent steps. You can navigate the notebook via the following links.
* [Step 0](#step0): Import Datasets
* [Step 1](#step1): Detect Humans
* [Step 2](#step2): Detect Dogs
* [Step 3](#step3): Create a CNN to Classify Dog Breeds (from Scratch)
* [Step 4](#step4): Create a CNN to Classify Dog Breeds (using Transfer Learning)
* [Step 5](#step5): Write Your Algorithm
* [Step 6](#step6): Test Your Algorithm
---
Step 0: Import Datasets
First, download the human and dog datasets:
* Download the [dog dataset](https://s3-us-west-1.amazonaws.com/udacity-aind/dog-project/dogImages.zip). Unzip the file and place it in this project's home directory, at the location `/dog_images`.
* Download the [human dataset](https://s3-us-west-1.amazonaws.com/udacity-aind/dog-project/lfw.zip). Unzip the file and place it in this project's home directory, at the location `/lfw`.
*Note: If you are using a Windows machine, we recommend using [7zip](http://www.7-zip.org/) to unzip the files.*
In the code cell below, we save the file paths of the human (LFW) dataset and the dog dataset in the NumPy arrays `human_files` and `dog_files`.
###Code
import numpy as np
from glob import glob
# load filenames for human and dog images
# human_files = np.array(glob("/data/lfw/*/*"))
# dog_files = np.array(glob("/data/dog_images/*/*/*"))
# raw strings avoid backslash-escape issues in Windows paths
human_files = np.array(glob(r"E:\DL_training_datas\data\lfw\*\*"))
dog_files = np.array(glob(r"E:\DL_training_datas\data\dog_images\*\*\*"))
# print number of images in each dataset
print('There are %d total human images.' % len(human_files))
print('There are %d total dog images.' % len(dog_files))
###Output
There are 13233 total human images.
There are 8351 total dog images.
###Markdown
Step 1: Detect Humans
In this section, we use OpenCV's [Haar feature-based cascade classifiers](http://docs.opencv.org/trunk/d7/d8b/tutorial_py_face_detection.html) to detect human faces in images. OpenCV provides many pre-trained face detectors, stored as XML files on [github](https://github.com/opencv/opencv/tree/master/data/haarcascades). We have downloaded one of these detectors and stored it in the `haarcascades` directory. In the next code cell, we demonstrate how to use this detector to find human faces in a sample image.
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# extract pre-trained face detector
face_cascade = cv2.CascadeClassifier('haarcascades/haarcascade_frontalface_alt.xml')
# load color (BGR) image
img = cv2.imread(human_files[0])
# convert BGR image to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# find faces in image
faces = face_cascade.detectMultiScale(gray)
# print number of faces detected in the image
print('Number of faces detected:', len(faces))
# get bounding box for each detected face
for (x,y,w,h) in faces:
# add bounding box to color image
cv2.rectangle(img,(x,y),(x+w,y+h),(255,0,0),2)
# convert BGR image to RGB for plotting
cv_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
# display the image, along with bounding box
plt.imshow(cv_rgb)
plt.show()
###Output
Number of faces detected: 1
###Markdown
Before using any of the face detectors, it is standard practice to convert the images to grayscale. The `detectMultiScale` function executes the classifier stored in `face_cascade` and takes the grayscale image as a parameter. In the code above, `faces` is a numpy array of detected faces, where each row corresponds to a detected face. Each detected face is a 1D array with four entries that specify the bounding box of the detected face. The first two entries (extracted in the code above as `x` and `y`) specify the horizontal and vertical positions of the top-left corner of the bounding box. The last two entries (extracted as `w` and `h`) specify the width and height of the box. Write a Face Detector We can write a function that returns `True` if a human face is detected in an image and `False` otherwise. This function, named `face_detector`, takes the string-valued file path to an image as input and appears in the code block below.
###Code
# returns "True" if face is detected in image stored at img_path
def face_detector(img_path):
img = cv2.imread(img_path)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray)
return len(faces) > 0
###Output
_____no_output_____
###Markdown
(IMPLEMENTATION) Assess the Face Detector __Question 1:__ Use the code cell below to test the performance of the `face_detector` function. - For the first 100 images in `human_files`, how many have a detected human face? - For the first 100 images in `dog_files`, how many have a detected human face? Ideally, we would like a face detected in every human image and no face detected in any dog image. Our algorithm falls short of this goal but still reaches an acceptable level. We extract the file paths of the first 100 images from each dataset and store them in the numpy arrays `human_files_short` and `dog_files_short`. __Answer:__ A face was detected in 96 of the first 100 images in `human_files`, and in 18 of the first 100 images in `dog_files`. (Fill in your counts and/or percentages in this cell.)
###Code
from tqdm import tqdm
human_files_short = human_files[:100]
dog_files_short = dog_files[:100]
#-#-# Do NOT modify the code above this line. #-#-#
## TODO: Test the performance of the face_detector algorithm
## on the images in human_files_short and dog_files_short.
face_in_human_files = 0
face_in_dog_files = 0
for i in range(100):
if face_detector(human_files_short[i]):
face_in_human_files += 1
if face_detector(dog_files_short[i]):
face_in_dog_files += 1
print("human_files 中的前100 张图像,有%d张图像检测到了人脸;dog_files 中的前100 张图像,有%d张图像检测到了人脸" %(face_in_human_files, face_in_dog_files))
###Output
human_files 中的前100 张图像,有96张图像检测到了人脸;dog_files 中的前100 张图像,有18张图像检测到了人脸
###Markdown
We suggest using OpenCV's face detector in your algorithm to detect human images, but you are free to try other approaches, especially ones that use deep learning :). Design and test your own face detection algorithm in the code cell below. If you decide to pursue this _optional_ task, report performance on `human_files_short` and `dog_files_short`. A minimal sketch using a second Haar cascade follows the empty cell below.
###Code
### (Optional)
### TODO: Test performance of another face detection algorithm.
### Feel free to use as many code cells as needed.
###Output
_____no_output_____
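###Markdown
As one optional comparison, here is a minimal sketch that swaps in a second pre-trained Haar cascade (`haarcascade_frontalface_alt2.xml`). It assumes this XML file has also been downloaded into the local `haarcascades` directory; everything else reuses objects defined above.
###Code
# minimal sketch: evaluate a second Haar cascade on the same 100 + 100 images
# assumption: haarcascades/haarcascade_frontalface_alt2.xml exists locally
face_cascade_alt2 = cv2.CascadeClassifier('haarcascades/haarcascade_frontalface_alt2.xml')
def face_detector_alt2(img_path):
    img = cv2.imread(img_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    return len(face_cascade_alt2.detectMultiScale(gray)) > 0
n_human = sum(face_detector_alt2(f) for f in human_files_short)
n_dog = sum(face_detector_alt2(f) for f in dog_files_short)
print("alt2 cascade: faces in %d/100 human images and %d/100 dog images" % (n_human, n_dog))
###Output
_____no_output_____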
###Markdown
--- Step 2: Detect Dogs In this section, we use a [pre-trained model](http://pytorch.org/docs/master/torchvision/models.html) to detect dogs in images. Obtain the Pre-trained VGG-16 Model The code cell below downloads the VGG-16 model, along with weights trained on [ImageNet](http://www.image-net.org/), a very popular dataset used for image classification and other vision tasks. ImageNet contains over 10 million URLs, each linking to an image containing an object from one of [1000 categories](https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a).
###Code
import torch
import torchvision.models as models
# define VGG16 model
VGG16 = models.vgg16(pretrained=True)
# check if CUDA is available
use_cuda = torch.cuda.is_available()
# use_cuda = False
# move model to GPU if CUDA is available
if use_cuda:
VGG16 = VGG16.cuda()
print("torch.version:",torch.__version__)
print("torch.version.cuda:",torch.version.cuda)
print("use_cuda:",use_cuda)
###Output
torch.version: 1.2.0+cu92
torch.version.cuda: 9.2
use_cuda: True
###Markdown
Given an image, this pre-trained VGG-16 model returns a prediction for the object in the image (one of the 1000 possible categories in ImageNet). (IMPLEMENTATION) Making Predictions with a Pre-trained Model In the next code cell, you will write a function that takes an image path (such as `'dogImages/train/001.Affenpinscher/Affenpinscher_00001.jpg'`) as input and returns the index of the ImageNet class predicted by the pre-trained VGG-16 model. The output should always be an integer between 0 and 999, inclusive. Before writing this function, read this [PyTorch documentation](http://pytorch.org/docs/stable/torchvision/models.html) to learn how to pre-process tensors for pre-trained models.
###Code
from PIL import Image
import torchvision.transforms as transforms
from torch.autograd import Variable
# standard ImageNet pre-processing for torchvision's pre-trained models:
# resize, center-crop to 224x224, convert to tensor, and normalize
img_transforms = transforms.Compose([transforms.Resize(256),
                                     transforms.CenterCrop(224),
                                     transforms.ToTensor(),
                                     transforms.Normalize([0.485, 0.456, 0.406],
                                                          [0.229, 0.224, 0.225])])
# import torchsnooper
# @torchsnooper.snoop()
def VGG16_predict(img_path):
'''
Use pre-trained VGG-16 model to obtain index corresponding to
predicted ImageNet class for image at specified path
Args:
img_path: path to an image
Returns:
Index corresponding to VGG-16 model's prediction
'''
## TODO: Complete the function.
## Load and pre-process an image from the given img_path
## Return the *index* of the predicted class for that image
VGG16.eval()
    img = Image.open(img_path).convert('RGB')  # force 3 channels (handles grayscale/RGBA files)
img_tensor = img_transforms(img).float()
img_tensor = img_tensor.unsqueeze_(0)
if use_cuda:
img_tensor = img_tensor.cuda()
output = VGG16(img_tensor)
# index = output.data.numpy().argmax()
_, pred = torch.max(output, 1)
# print("pred:{}".format(pred.item()))
return pred.item() # predicted class index
###Output
_____no_output_____
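###Markdown
A quick sanity check of the helper on one known dog image: the printed index should be an integer in 0-999, and for most dog images it should fall in the 151-268 dog range discussed next.
###Code
print(VGG16_predict(dog_files[0]))
###Output
_____no_output_____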
###Markdown
(IMPLEMENTATION) Write a Dog Detector Looking at the [dictionary](https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a), you will notice that the categories corresponding to dogs appear in an uninterrupted sequence with keys 151-268, inclusive, covering all categories from `'Chihuahua'` to `'Mexican hairless'`. Thus, to check whether the pre-trained VGG-16 model predicts that an image contains a dog, we only need to check whether the predicted index falls between 151 and 268 (inclusive). Use this information to complete the `dog_detector` function below, which returns `True` if a dog is detected in an image (and `False` otherwise).
###Code
### returns "True" if a dog is detected in the image stored at img_path
def dog_detector(img_path):
## TODO: Complete the function.
index = VGG16_predict(img_path)
return index >= 151 and index <= 268 # true/false
###Output
_____no_output_____
###Markdown
(IMPLEMENTATION) Assess the Dog Detector __Question 2:__ Test the performance of `dog_detector` in the code cell below. - For the images in `human_files_short`, how many have a detected dog? - For the images in `dog_files_short`, how many have a detected dog? __Answer:__ A dog was detected in 0 of the first 100 images in `human_files`, and in 86 of the first 100 images in `dog_files`.
###Code
### TODO: Test the performance of the dog_detector function
### on the images in human_files_short and dog_files_short.
dog_in_human_files = 0
dog_in_dog_files = 0
for i in range(100):
if dog_detector(human_files_short[i]):
dog_in_human_files += 1
if dog_detector(dog_files_short[i]):
dog_in_dog_files += 1
print("human_files 中的前100 张图像,有%d张图像检测到了小狗;dog_files 中的前100 张图像,有%d张图像检测到了小狗" %(dog_in_human_files, dog_in_dog_files))
###Output
human_files 中的前100 张图像,有0张图像检测到了小狗;dog_files 中的前100 张图像,有86张图像检测到了小狗
###Markdown
We suggest using VGG-16 to detect dog images in your algorithm, but you are free to try other pre-trained networks (such as [Inception-v3](http://pytorch.org/docs/master/torchvision/models.html#inception-v3), [ResNet-50](http://pytorch.org/docs/master/torchvision/models.html#id3), etc.). Test other pre-trained PyTorch models in the code cell below. If you decide to pursue this _optional_ task, report performance on `human_files_short` and `dog_files_short`. A minimal ResNet-50 sketch follows the empty cell below.
###Code
### (Optional)
### TODO: Report the performance of another pre-trained network.
### Feel free to use as many code cells as needed.
# resnet50 = models.resnet50(pretrained=True)
# print(resnet50)
# print("===================================")
# inception_v3 = models.inception_v3(pretrained=True)
# print(inception_v3)
# print("===================================")
# alexnet = models.alexnet(pretrained=True)
# print(alexnet)
###Output
_____no_output_____
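###Markdown
Below is a minimal sketch of this optional comparison using ResNet-50 in place of VGG-16. It relies only on facts already used in this notebook: `models.resnet50(pretrained=True)` from torchvision and the ImageNet dog index range 151-268; `img_transforms` and `use_cuda` are reused from the VGG-16 cells above.
###Code
# minimal sketch: a dog detector backed by a pre-trained ResNet-50
resnet50 = models.resnet50(pretrained=True)
if use_cuda:
    resnet50 = resnet50.cuda()
resnet50.eval()
def resnet50_dog_detector(img_path):
    img_tensor = img_transforms(Image.open(img_path).convert('RGB')).unsqueeze_(0)
    if use_cuda:
        img_tensor = img_tensor.cuda()
    with torch.no_grad():
        _, pred = torch.max(resnet50(img_tensor), 1)
    return 151 <= pred.item() <= 268  # ImageNet dog classes
print("dogs detected in dog_files_short: %d/100" % sum(resnet50_dog_detector(f) for f in dog_files_short))
###Output
_____no_output_____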
###Markdown
--- Step 3: Create a CNN to Classify Dog Breeds (from Scratch)
Now that we have functions for detecting humans and dogs in images, we need a way to predict breed from images. In this step, you will create a CNN that classifies dog breeds. You must create the CNN from scratch (so you cannot use transfer learning yet!), and you must attain a test accuracy of at least 10%. In Step 4 of this notebook, you will use transfer learning to create a CNN and attain much higher accuracy.
Predicting the breed of a dog in an image is a very hard challenge. Honestly, even we humans have trouble telling a Brittany from a Welsh Springer Spaniel.
Brittany | Welsh Springer Spaniel (comparison images omitted)
There are many other dog breed pairs that look similar (for example, Curly-Coated Retrievers and American Water Spaniels).
Curly-Coated Retriever | American Water Spaniel (comparison images omitted)
Likewise, Labradors come in yellow, chocolate, and black varieties. A vision-based algorithm needs to overcome this large intra-class variation and decide how to classify all of these different shades as the same breed.
Yellow Labrador | Chocolate Labrador | Black Labrador (comparison images omitted)
Random guessing performs poorly: beyond the fact that the classes are somewhat imbalanced, a random guess is correct with probability of about 1/133, an accuracy below 1%.
In deep learning, practice is more reliable than theoretical knowledge. Try many different architectures and trust your intuition. Hopefully you can have fun with it!
(IMPLEMENTATION) Specify Data Loaders for the Dog Dataset
In the code cell below, write three separate [data loaders](http://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader) for the training, validation, and test datasets of dog images (located at `dog_images/train`, `dog_images/valid`, and `dog_images/test`, respectively). [This custom dataset documentation](http://pytorch.org/docs/stable/torchvision/datasets.html) may be helpful. If you want to augment the training and/or validation data, check out the many available [transforms](http://pytorch.org/docs/stable/torchvision/transforms.html?highlight=transform)!
###Code
import os
from torchvision import datasets
from PIL import ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True
### TODO: Write data loaders for training, validation, and test sets
## Specify appropriate transforms, and batch_sizes
train_transforms = transforms.Compose([transforms.RandomRotation(30),
transforms.RandomResizedCrop(224),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize([0.5, 0.5, 0.5],
[0.5, 0.5, 0.5])])
# validation/test images must be normalized the same way as the training data
test_transforms = transforms.Compose([transforms.Resize(256),
                                      transforms.CenterCrop(224),
                                      transforms.ToTensor(),
                                      transforms.Normalize([0.5, 0.5, 0.5],
                                                           [0.5, 0.5, 0.5])])
# data_dir = '/data/dog_images'
data_dir = r'E:\DL_training_datas\data\dog_images'  # raw string avoids accidental backslash escapes on Windows paths
train_dir = os.path.join(data_dir, 'train')
valid_dir = os.path.join(data_dir, 'valid')
test_dir = os.path.join(data_dir, 'test')
train_data = datasets.ImageFolder(train_dir, transform=train_transforms)
valid_data = datasets.ImageFolder(valid_dir, transform=test_transforms)
test_data = datasets.ImageFolder(test_dir, transform=test_transforms)
# batch_size = 1
batch_size = 20  # the loader length printed below (334 = 6680/20) implies this batch size was used
num_workers=0
# prepare data loaders
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
num_workers=num_workers, shuffle=True)
# train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
# sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(valid_data, batch_size=batch_size,
num_workers=num_workers, shuffle=True)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers, shuffle=True)
loaders_scratch = {'train': train_loader, 'valid': valid_loader, 'test': test_loader}
print(train_data)
print("train data len:{}".format(len(train_data)))
print(train_loader)
print(len(train_loader))
###Output
Dataset ImageFolder
Number of datapoints: 6680
Root location: E:\DL_training_datas\data\dog_images\train
StandardTransform
Transform: Compose(
RandomRotation(degrees=(-30, 30), resample=False, expand=False)
RandomResizedCrop(size=(224, 224), scale=(0.08, 1.0), ratio=(0.75, 1.3333), interpolation=PIL.Image.BILINEAR)
RandomHorizontalFlip(p=0.5)
ToTensor()
Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])
)
train data len:6680
<torch.utils.data.dataloader.DataLoader object at 0x000001F7820324E0>
334
###Markdown
**Question 3:** Describe your chosen data-preprocessing pipeline. - How did you resize the images (cropping, stretching, etc.)? What size did you pick for the input tensor, and why? - Did you decide to augment the dataset? If so, how (translation, flipping, rotation, etc.)? If not, why not? **Answer:** - (1) Training images are randomly cropped and resized with torchvision's `transforms.RandomResizedCrop(224)`; validation and test images are resized and center-cropped to the same size. The input tensor is 3×224×224, matching the crop size used in the transforms above and the input resolution expected by standard ImageNet-style architectures. - (2) The training set is augmented: `transforms.RandomRotation` and `transforms.RandomHorizontalFlip` apply random rotations and horizontal flips to the images. (IMPLEMENTATION) Model Architecture. Create a CNN to classify dog breed. Use the template in the code cell below.
###Code
import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
class Net(nn.Module):
### TODO: choose an architecture, and complete the class
def __init__(self):
super(Net, self).__init__()
## Define layers of a CNN
# self.conv1 = nn.Conv2d(3, 16, 3, padding = 1)
# self.conv2 = nn.Conv2d(16, 32, 3, padding = 1)
# self.conv3 = nn.Conv2d(32, 64, 3, padding = 1)
# self.conv4 = nn.Conv2d(64, 64, 3, padding = 1)
# self.conv5 = nn.Conv2d(64, 128, 3, padding = 1)
# self.conv6 = nn.Conv2d(128, 128, 3, padding = 1)
# self.conv7 = nn.Conv2d(128, 256, 3, padding = 1)
##Max pooling layers
# self.pool = nn.MaxPool2d(2, 2)
##Linear layers
# self.fc1 = nn.Linear(256*7*7, 2090)
# self.fc2 = nn.Linear(2090, 2090)
# self.fc3 = nn.Linear(2090, 133)
##Dropout layer
# self.dropout = nn.Dropout(0.25)
# =============================================
self.conv1 = nn.Conv2d(3, 64, 11, 4,padding = 2)
self.conv2 = nn.Conv2d(64, 192, 5, padding = 2)
self.conv3 = nn.Conv2d(192, 384, 3, padding = 1)
self.conv4 = nn.Conv2d(384, 256, 3, padding = 1)
self.conv5 = nn.Conv2d(256, 256, 3, padding = 1)
self.pool = nn.MaxPool2d(3, stride=2, padding=0)
self.fc1 = nn.Linear(256*6*6, 4096)
self.fc2 = nn.Linear(4096, 4096)
self.fc3 = nn.Linear(4096, 133)
self.dropout = nn.Dropout(0.5)
def forward(self, x):
## Define forward behavior
# x = self.pool(F.relu(self.conv1(x)))
# x = self.pool(F.relu(self.conv2(x)))
# x = self.pool(F.relu(self.conv3(x)))
# x = F.relu(self.conv4(x))
# x = self.pool(F.relu(self.conv5(x)))
# x = F.relu(self.conv6(x))
# x = self.pool(F.relu(self.conv7(x)))
# x = x.view(-1, 256*7*7)
# x = self.dropout(F.relu(self.fc1(x)))
# x = self.dropout(F.relu(self.fc2(x)))
# x = self.fc3(x)
# =============================================
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = F.relu(self.conv3(x))
x = F.relu(self.conv4(x))
x = self.pool(F.relu(self.conv5(x)))
x = x.view(-1, 256*6*6)
x = self.dropout(F.relu(self.fc1(x)))
x = self.dropout(F.relu(self.fc2(x)))
x = self.fc3(x)
return x
#-#-# You do NOT have to modify the code below this line. #-#-#
# instantiate the CNN
model_scratch = Net()
# move tensors to GPU if CUDA is available
if use_cuda:
model_scratch.cuda()
print(model_scratch)
###Output
Net(
(conv1): Conv2d(3, 64, kernel_size=(11, 11), stride=(4, 4), padding=(2, 2))
(conv2): Conv2d(64, 192, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
(conv3): Conv2d(192, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(384, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(pool): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)
(fc1): Linear(in_features=9216, out_features=4096, bias=True)
(fc2): Linear(in_features=4096, out_features=4096, bias=True)
(fc3): Linear(in_features=4096, out_features=133, bias=True)
(dropout): Dropout(p=0.5, inplace=False)
)
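###Markdown
A quick, optional sanity check (a sketch added here, not part of the original run) is to count the trainable parameters of the freshly built network before training it.
###Code
# Count trainable parameters of the scratch model (illustrative sanity check)
n_params = sum(p.numel() for p in model_scratch.parameters() if p.requires_grad)
print('model_scratch has {:,} trainable parameters'.format(n_params))
###Output
_____no_output_____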
###Markdown
__Question 4:__ Outline the steps you took to get to your final CNN architecture and your reasoning at each step. __Answer:__ A first attempt (left commented out above) stacked seven small 3×3 convolutions; the final architecture instead mirrors the AlexNet layout: five convolutional layers (an 11×11 stride-4 convolution, a 5×5 convolution, and three 3×3 convolutions) interleaved with 3×3/stride-2 max pooling, followed by three fully connected layers (9216 → 4096 → 4096 → 133) with dropout of 0.5. The large early stride shrinks the 224×224 input quickly, the design has a proven track record on ImageNet-scale classification, and the final layer is sized to the 133 breed classes. (IMPLEMENTATION) Specify Loss Function and Optimizer. Use the next code cell to specify a [loss function](http://pytorch.org/docs/stable/nn.htmlloss-functions) and [optimizer](http://pytorch.org/docs/stable/optim.html). Save the chosen loss function as `criterion_scratch`, and the optimizer as `optimizer_scratch` below.
###Code
import torch.optim as optim
### TODO: select loss function
criterion_scratch = nn.CrossEntropyLoss()
### TODO: select optimizer
# optimizer_scratch = optim.SGD(model_scratch.parameters(), lr=0.001, momentum=0.1)#0.001
optimizer_scratch = optim.Adam(model_scratch.parameters(), lr=0.01)#0.001
# Hand the optimizer to a scheduler so the learning rate is managed automatically:
# after 200 consecutive checks without a drop in loss, multiply the learning rate by 0.8
# scheduler = ReduceLROnPlateau(optimizer_scratch, mode="min", patience=200, factor=0.8)#1
# Multiply the learning rate by 0.9 every 800 steps
# scheduler = StepLR(optimizer, step_size=800, gamma=0.9)#2
###Output
_____no_output_____
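###Markdown
If one of the schedulers commented out above were enabled, it would also need to be stepped inside the training loop. Below is a minimal sketch of the `ReduceLROnPlateau` variant; this wiring is hypothetical and is not used in the training run that follows.
###Code
from torch.optim.lr_scheduler import ReduceLROnPlateau

# Reduce the learning rate by a factor of 0.8 after 200 scheduler steps
# without improvement in the monitored quantity (here: validation loss)
scheduler = ReduceLROnPlateau(optimizer_scratch, mode="min", patience=200, factor=0.8)

# Inside train(), after computing valid_loss for the epoch, one would call:
#     scheduler.step(valid_loss)
###Output
_____no_output_____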
###Markdown
(IMPLEMENTATION) Train and Validate the Model. Train and validate your model in the code cell below. [Save the final model parameters](http://pytorch.org/docs/master/notes/serialization.html) at filepath `'model_scratch.pt'`.
###Code
def train(n_epochs, loaders, model, optimizer, criterion, use_cuda, save_path):
"""returns trained model"""
# initialize tracker for minimum validation loss
valid_loss_min = np.Inf
for epoch in range(1, n_epochs+1):
# initialize variables to monitor training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
model.train()
for batch_idx, (data, target) in enumerate(loaders['train']):
# print("train batch_idx:{}".format(batch_idx))
# move to GPU
if use_cuda:
data, target = data.cuda(), target.cuda()
## find the loss and update the model parameters accordingly
## record the average training loss, using something like
## train_loss = train_loss + ((1 / (batch_idx + 1)) * (loss.data - train_loss))
optimizer.zero_grad()
output = model(data)
loss = criterion(output, target)
loss.backward()
optimizer.step()
train_loss = train_loss + ((1 / (batch_idx + 1)) * (loss.data - train_loss))
######################
# validate the model #
######################
with torch.no_grad():
model.eval()
for batch_idx, (data, target) in enumerate(loaders['valid']):
# print("valid batch_idx:{}".format(batch_idx))
# move to GPU
if use_cuda:
data, target = data.cuda(), target.cuda()
## update the average validation loss
output = model(data)
loss = criterion(output, target)
valid_loss = valid_loss + ((1 / (batch_idx + 1)) * (loss.data - valid_loss))
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch,
train_loss,
valid_loss
))
## TODO: save the model if validation loss has decreased
if valid_loss < valid_loss_min :
print("Validation loss decreased...")
torch.save(model.state_dict(), save_path)
valid_loss_min = valid_loss
# return trained model
return model
# train the model
model_scratch = train(200, loaders_scratch, model_scratch, optimizer_scratch,
criterion_scratch, use_cuda, 'model_scratch.pt')
# load the model that got the best validation accuracy
model_scratch.load_state_dict(torch.load('model_scratch.pt'))
###Output
_____no_output_____
###Markdown
(IMPLEMENTATION) Test the Model. Try out your model on the test dataset of dog images. Use the code cell below to calculate and print the test loss and accuracy. Ensure that your test accuracy is greater than 10%.
###Code
def test(loaders, model, criterion, use_cuda):
# monitor test loss and accuracy
test_loss = 0.
correct = 0.
total = 0.
model.eval()
for batch_idx, (data, target) in enumerate(loaders['test']):
# move to GPU
if use_cuda:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the loss
loss = criterion(output, target)
# update average test loss
test_loss = test_loss + ((1 / (batch_idx + 1)) * (loss.data - test_loss))
# convert output probabilities to predicted class
pred = output.data.max(1, keepdim=True)[1]
# compare predictions to true label
correct += np.sum(np.squeeze(pred.eq(target.data.view_as(pred))).cpu().numpy())
total += data.size(0)
print('Test Loss: {:.6f}\n'.format(test_loss))
print('\nTest Accuracy: %2d%% (%2d/%2d)' % (
100. * correct / total, correct, total))
# call test function
test(loaders_scratch, model_scratch, criterion_scratch, use_cuda)
###Output
Test Loss: 4.854946
Test Accuracy: 1% (10/836)
###Markdown
--- Step 4: Create a CNN to Classify Dog Breeds (Using Transfer Learning). You will now use transfer learning to create a CNN that can identify dog breed from images. Your CNN must attain at least 60% accuracy on the test set. (IMPLEMENTATION) Specify Data Loaders for the Dog Dataset. In the code cell below, write three separate [data loaders](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader) for the training, validation, and test datasets of dog images (located at `dogImages/train`, `dogImages/valid`, and `dogImages/test`, respectively). **You may reuse the same data loaders you created earlier when building a CNN from scratch.**
###Code
## TODO: Specify data loaders
loaders_transfer = {'train': train_loader, 'valid': valid_loader, 'test': test_loader}
data_transfer = {'train': train_data, 'valid': valid_data, 'test': test_data}
###Output
_____no_output_____
###Markdown
(IMPLEMENTATION) Model Architecture. Use transfer learning to create a CNN to classify dog breed. Fill in the code cell below and save your initialized model as the variable `model_transfer`.
###Code
import torchvision.models as models
import torch.nn as nn
## TODO: Specify model architecture
model_transfer = models.vgg16(pretrained=True)
# Freeze training for all "features" layers
for param in model_transfer.features.parameters():
param.requires_grad = False
n_inputs = model_transfer.classifier[6].in_features
# add last linear layer
# new layers automatically have requires_grad = True
last_layer = nn.Linear(n_inputs, 133)
model_transfer.classifier[6] = last_layer
# if GPU is available, move the model to GPU
if use_cuda:
model_transfer.cuda()
# if use_cuda:
# model_transfer = model_transfer.cuda()
print(model_transfer)
###Output
VGG(
(features): Sequential(
(0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): ReLU(inplace=True)
(2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(3): ReLU(inplace=True)
(4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(6): ReLU(inplace=True)
(7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(8): ReLU(inplace=True)
(9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(11): ReLU(inplace=True)
(12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(13): ReLU(inplace=True)
(14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(15): ReLU(inplace=True)
(16): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(17): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(18): ReLU(inplace=True)
(19): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(20): ReLU(inplace=True)
(21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(22): ReLU(inplace=True)
(23): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(24): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(25): ReLU(inplace=True)
(26): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(27): ReLU(inplace=True)
(28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(29): ReLU(inplace=True)
(30): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(avgpool): AdaptiveAvgPool2d(output_size=(7, 7))
(classifier): Sequential(
(0): Linear(in_features=25088, out_features=4096, bias=True)
(1): ReLU(inplace=True)
(2): Dropout(p=0.5, inplace=False)
(3): Linear(in_features=4096, out_features=4096, bias=True)
(4): ReLU(inplace=True)
(5): Dropout(p=0.5, inplace=False)
(6): Linear(in_features=4096, out_features=133, bias=True)
)
)
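###Markdown
As a quick sketch (not in the original notebook), one can verify that after freezing the `features` layers, only the classifier parameters will receive gradient updates.
###Code
# List the parameter tensors that remain trainable after freezing the features
trainable = [name for name, p in model_transfer.named_parameters() if p.requires_grad]
print('{} trainable tensors, including: {}'.format(len(trainable), trainable[-2:]))
###Output
_____no_output_____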
###Markdown
__Question 5:__ Outline the steps you took to get to your final CNN architecture and your reasoning at each step. Describe why you think the architecture is suitable for the current problem. __Answer:__ A VGG-16 network pretrained on ImageNet is loaded, all of its convolutional `features` layers are frozen so the pretrained filters are preserved, and only the last classifier layer is replaced with a new `nn.Linear(4096, 133)` layer whose weights are trained for the 133 breed classes. This suits the problem well: ImageNet already contains many dog categories, so the pretrained features transfer directly to breed classification, and training only the final layer is feasible with the comparatively small dog dataset. (IMPLEMENTATION) Specify Loss Function and Optimizer. Use the next code cell to specify a [loss function](http://pytorch.org/docs/master/nn.htmlloss-functions) and [optimizer](http://pytorch.org/docs/master/optim.html). Save the chosen loss function as `criterion_transfer`, and the optimizer as `optimizer_transfer` below.
###Code
criterion_transfer = nn.CrossEntropyLoss()
optimizer_transfer = optim.SGD(model_transfer.classifier.parameters(), lr=0.001)
###Output
_____no_output_____
###Markdown
(IMPLEMENTATION) Train and Validate the Model. Train and validate your model in the code cell below. [Save the final model parameters](http://pytorch.org/docs/master/notes/serialization.html) at filepath `'model_transfer.pt'`.
###Code
# train the model
n_epochs = 10
model_transfer = train(n_epochs, loaders_transfer, model_transfer, optimizer_transfer, criterion_transfer, use_cuda, 'model_transfer.pt')
# load the model that got the best validation accuracy (uncomment the line below)
model_transfer.load_state_dict(torch.load('model_transfer.pt'))
###Output
_____no_output_____
###Markdown
(IMPLEMENTATION) Test the Model. Try out your model on the test dataset of dog images. Use the code cell below to calculate and print the test loss and accuracy. Ensure that your test accuracy is greater than 60%.
###Code
test(loaders_transfer, model_transfer, criterion_transfer, use_cuda)
###Output
Test Loss: 2.212241
Test Accuracy: 42% (353/836)
###Markdown
(IMPLEMENTATION) Predict Dog Breed with the Model. Write a function that takes an image path as input and returns the dog breed (`Affenpinscher`, `Afghan hound`, etc.) that is predicted by your model.
###Code
### TODO: Write a function that takes a path to an image as input
### and returns the dog breed that is predicted by the model.
# list of class names by index, i.e. a name can be accessed like class_names[0]
class_names = [item[4:].replace("_", " ") for item in data_transfer['train'].classes]
from PIL import Image  # Image itself was never imported earlier (only ImageFile)

def predict_breed_transfer(img_path):
    # load the image and return the predicted breed
    model_transfer.eval()
    img = Image.open(img_path).convert('RGB')
    # NOTE: img_transforms was not defined anywhere above; a standard evaluation
    # pipeline matching the training normalization is assumed here
    img_transforms = transforms.Compose([transforms.Resize(256),
                                         transforms.CenterCrop(224),
                                         transforms.ToTensor(),
                                         transforms.Normalize([0.5, 0.5, 0.5],
                                                              [0.5, 0.5, 0.5])])
    img_tensor = img_transforms(img).float()
    img_tensor = img_tensor.unsqueeze_(0)  # add a batch dimension
    if use_cuda:
        img_tensor = img_tensor.cuda()
    with torch.no_grad():
        output = model_transfer(img_tensor)
    _, pred = torch.max(output, 1)
    idx = pred.item()
    return class_names[idx]
###Output
_____no_output_____
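###Markdown
A quick usage sketch (illustrative; `dog_files` is the array of dog image paths loaded earlier in the notebook):
###Code
# Example invocation of the breed predictor on the first dog image
print(predict_breed_transfer(dog_files[0]))
###Output
_____no_output_____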
###Markdown
--- Step 5: Write Your Algorithm. Write an algorithm that accepts a file path to an image and first determines whether the image contains a human face, a dog, or neither. Then, - if a __dog__ is detected in the image, return the predicted breed. - if a __human__ face is detected in the image, return the resembling dog breed. - if __neither__ is detected in the image, provide an error message. You may write your own functions for detecting humans and dogs in images, or you may use the `face_detector` and `dog_detector` functions developed above. You must use the CNN from Step 4 to predict dog breed. Some sample algorithm output is provided below, but feel free to design your own user experience. (IMPLEMENTATION) Write Your Algorithm
###Code
### TODO: Write your algorithm.
### Feel free to use as many code cells as needed.
def show_detect_result_info(is_human, img_path):
dog_class_name = predict_breed_transfer(img_path)
print("Hello, human!" if is_human else "It's a [{}]".format(dog_class_name))
img = cv2.imread(img_path)
cv_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
plt.imshow(cv_rgb)
plt.show()
if is_human:
print("You look like a ...{}".format(dog_class_name))
print("\n")
def run_app(img_path):
## handle cases for a human face, dog, and neither
if face_detector(img_path):
show_detect_result_info(True, img_path)
elif dog_detector(img_path):
show_detect_result_info(False, img_path)
else:
print("Oops, No faces or dogs detected...\n")
###Output
_____no_output_____
###Markdown
--- Step 6: Test Your Algorithm. In this section, you will take your new algorithm for a spin! What kind of dog does the algorithm think you look like? If you have a dog, does it predict your dog's breed accurately? If you have a cat, does it mistakenly think your cat is a dog? (IMPLEMENTATION) Test Your Algorithm on Sample Images. Test your algorithm with at least six images on your computer. Feel free to use any images you like; use at least two human images and two dog images. __Question 6:__ Is the output better than you expected :)? Or worse :(? Provide at least three possible points of improvement for your algorithm. __Answer:__ (three possible points of improvement) For example: the transfer model's test accuracy (42%) is still below the 60% target, so training for more epochs or fine-tuning some of the frozen convolutional layers should help; the from-scratch model only reached 1%, suggesting a smaller Adam learning rate than 0.01 or enabling one of the commented-out schedulers; and the algorithm stops at the first positive detector, so an image containing both a human and a dog is reported only as a human.
###Code
## TODO: Execute your algorithm from Step 6 on
## at least 6 images on your computer.
## Feel free to use as many code cells as needed.
## suggested code, below
for file in np.hstack((human_files[:3], dog_files[:3])):
run_app(file)
###Output
Hello, human!
animals/chordates/fish/.ipynb_checkpoints/fish_biomass_estimate-checkpoint.ipynb | ###Markdown
###Markdown
Estimating the biomass of fish. To estimate the biomass of fish, we first estimate the total biomass of mesopelagic fish, and then add to this the estimate for non-mesopelagic fish made by [Wilson et al.](http://dx.doi.org/10.1126/science.1157972). In order to estimate the biomass of mesopelagic fish, we rely on two independent methods: an estimate based on trawling by [Lam & Pauly](http://www.seaaroundus.org/doc/Researcher+Publications/dpauly/PDF/2005/OtherItems/MappingGlobalBiomassMesopelagicFishes.pdf), and an estimate based on sonar. Sonar-based estimate. We generate the sonar-based estimate relying on data from [Irigoien et al.](http://dx.doi.org/10.1038/ncomms4271) and [Proud et al.](http://dx.doi.org/10.1016/j.cub.2016.11.003). Estimating the biomass of mesopelagic fish using sonar is a two-step process. First we use estimates of the global backscatter of mesopelagic fish. This backscatter is then converted to an estimate of the global biomass of mesopelagic fish using estimates for the target strength of a single mesopelagic fish. Total backscatter. To estimate the total backscatter of mesopelagic fish, we rely on [Irigoien et al.](http://dx.doi.org/10.1038/ncomms4271) and [Proud et al.](http://dx.doi.org/10.1016/j.cub.2016.11.003). Irigoien et al. generate several different estimates for the global nautical area scatter of mesopelagic fish. We use the geometric mean of the estimates of Irigoien et al. as one source for estimating the total backscatter of mesopelagic fish. We note that the units used by Irigoien et al. are incorrect: the nautical area scattering coefficient (NASC) is measured in $\frac{m^2}{nmi^2}$, but the order of magnitude of the values estimated by Irigoien et al. implies that they multiplied the NASC by the surface area of the ocean in units of $m^2$. This means that the values reported by Irigoien et al. are in fact in units of $\frac{m^4}{nmi^2}$. We convert the values reported by Irigoien et al. from the total scatter to the total backscatter using the equation: $$global \: backscatter \: [m^2] = \frac{global \: scatter \: [\frac{m^4}{nmi^2}]}{4\pi \times \frac{1852^2 m^2}{nmi^2}}$$
###Code
# Imports consolidated here (set up once; CI_helper comes from the repo's statistics_helper,
# and excel_utils is assumed to provide the update_results/update_MS_data helpers used at the end)
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
pd.options.display.float_format = '{:,.1e}'.format
from scipy.stats import gmean
import sys
sys.path.insert(0, '../../../statistics_helper')
from CI_helper import *
from excel_utils import *

# Load scatter data from Irigoien et al.
scatter = pd.read_excel('fish_biomass_data.xlsx', 'Total scatter',skiprows=1)
# convert scatter to backscatter
scatter['Total backscatter [m^2]'] = scatter['Total sA [m^4 nmi^-2]']/(4*np.pi*1852**2)
scatter['Total sA [m^4 nmi^-2]'] = scatter['Total sA [m^4 nmi^-2]'].astype(float)
scatter
# Calculate the geometric mean of values from Irigoien et al.
irigoien_backscatter = gmean(scatter['Total backscatter [m^2]'])
print('The geometric mean of global backscatter from Irigoien et al. is ≈%.1e m^2' %irigoien_backscatter)
###Output
The geometric mean of global backscatter from Irigoien et al. is ≈1.1e+10 m^2
###Markdown
As our best estimate for the global backscatter of mesopelagic fish, we use the geometric mean of the average value from Irigoien et al. and the value reported in Proud et al.
###Code
# The global backscatter reported by Proud et al.
proud_backscatter = 6.02e9
# Our best estimate
best_backscatter = gmean([irigoien_backscatter,proud_backscatter])
print('Our best estimate for the global backscatter of mesapelagic fish is %.0e m^2' %best_backscatter)
###Output
Our best estimate for the global backscatter of mesapelagic fish is 8e+09 m^2
###Markdown
Target strength. In order to convert the global backscatter into biomass, we use reported values for the target strength per unit biomass of mesopelagic fish. The target strength is a measure of the backscattering cross-section in dB, defined as $TS = 10 \times log_{10}(\sigma_{bs})$ with units of dB re 1 $m^2$. By measuring the relation between the target strength and biomass of mesopelagic fish, one can calculate the target strength per unit biomass in units of dB re 1 $\frac{m^2}{kg}$. We can use the global backscatter to calculate the total biomass of mesopelagic fish based on the equation provided in [MacLennan et al.](https://doi.org/10.1006/jmsc.2001.1158): $$biomass_{fish} \:[kg]= \frac{global \: backscatter \: [m^2]}{10^{\frac{TS_{kg}}{10}} [m^2 kg^{-1}]}$$ Where $TS_{kg}$ is the target strength per kilogram of biomass. The main factor affecting the target strength of mesopelagic fish is their swimbladder, as the swimbladder serves as a strong acoustic reflector at the frequencies used to measure the backscattering of mesopelagic fish. Irigoien et al. provide a list of values from the literature of target strength per unit biomass for mesopelagic fish with or without a swimbladder. It is clear from the data that the presence or absence of a swimbladder segregates the data into two distinct groups:
###Code
# Load target strength data
ts = pd.read_excel('fish_biomass_data.xlsx', 'Target strength',skiprows=1)
# Plot the distribution of TS for fish with or without swimbladder
ts[ts['Swimbladder']=='No']['dB kg^-1'].hist(label='No swimbladder', bins=3)
ts[ts['Swimbladder']=='Yes']['dB kg^-1'].hist(label='With swimbladder', bins=3)
plt.legend()
plt.xlabel(r'Target strength per unit biomass dB kg$^{-1}$')
plt.ylabel('Counts')
###Output
_____no_output_____
###Markdown
To estimate the characteristic target strength per unit biomass of mesopelagic fish, we first estimate the characteristic target strength per unit biomass of fish with or without a swimbladder. We assume that fish with and without a swimbladder represent equal portions of the population of mesopelagic fish. We test the uncertainty associated with this assumption in the uncertainty analysis section.
###Code
# Calculate the average TS per kg for fish with and without swimbladder
TS_bin = ts.groupby('Swimbladder').mean()
TS_bin['dB kg^-1']
###Output
_____no_output_____
###Markdown
We use our best estimate for the target strength per unit biomass to estimate the total biomass of mesopelagic fish. We transform the TS to backscattering cross-section, and then calculate the effective population backscattering cross-section based on the assumption that fish with or without swimbladder represent equal portions of the population.
###Code
# The conversion equation from global backscatter and target strength per unit biomass
biomass_estimator = lambda TS1,TS2,bs,frac: bs/(frac*10**(TS1/10.) + (1.-frac)*10**(TS2/10.))
# Estimate biomass in kg, then convert to g C (×1000 kg→g, ×0.15 wet weight→carbon mass),
# assuming fish with and without swimbladder each make up 50% of the population
sonar_biomass = biomass_estimator(*TS_bin['dB kg^-1'],best_backscatter,frac=0.5)*1000*0.15
print('Our best sonar-based estimate for the biomass of mesopelagic fish is ≈%.1f Gt C' %(sonar_biomass/1e15))
###Output
Our best sonar-based estimate for the biomass of mesopelagic fish is ≈1.8 Gt C
###Markdown
As noted in the Supplementary Information, there are several caveats which might bias the results. We use the geometric mean of the sonar-based estimate and the earlier trawling-based estimate to generate a robust estimate for the biomass of mesopelagic fish.
###Code
# The estimate of the global biomass of mesopelagic fish based on trawling reported in Lam & Pauly
trawling_biomass = 1.5e14
# Estimate the biomass of mesopelagic fish based on the geometric mean of sonar-based and trawling-based estimates
best_mesopelagic_biomass = gmean([sonar_biomass,trawling_biomass])
print('Our best estimate for the biomass of mesopelagic fish is ≈%.1f Gt C' %(best_mesopelagic_biomass/1e15))
###Output
Our best estimate for the biomass of mesopelagic fish is ≈0.5 Gt C
###Markdown
Finally, we add to our estimate of the biomass of mesopelagic fish the estimate of biomass of non-mesopelagic fish made by [Wilson et al.](http://dx.doi.org/10.1126/science.1157972) to generate our estimate for the total biomass of fish.
###Code
# The estimate of non-mesopelagic fish based on Wilson et al.
non_mesopelagic_fish_biomass = 1.5e14
best_estimate = best_mesopelagic_biomass+non_mesopelagic_fish_biomass
print('Our best estimate for the biomass of fish is ≈%.1f Gt C' %(best_estimate/1e15))
###Output
Our best estimate for the biomass of fish is ≈0.7 Gt C
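###Markdown
Spelling out the arithmetic behind the last two cells (values rounded): $$biomass_{meso} = \sqrt{1.8 \times 0.15} \approx 0.52 \: Gt \: C, \qquad biomass_{fish} \approx 0.52 + 0.15 \approx 0.7 \: Gt \: C$$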
###Markdown
Uncertainty analysis. In order to assess the uncertainty associated with our estimate for the biomass of fish, we assess the uncertainty associated with the sonar-based estimate of the biomass of mesopelagic fish, as well as with the non-mesopelagic fish biomass. Mesopelagic fish uncertainty. To quantify the uncertainty associated with our estimate of the biomass of mesopelagic fish, we assess the uncertainty associated with the sonar-based estimate, and the inter-method uncertainty between the sonar-based and trawling-based estimates. We do not assess the uncertainty of the trawling-based estimate, as no data regarding its uncertainty are available. Sonar-based estimate uncertainty. The main parameters influencing the uncertainty of the sonar-based estimate are the global backscatter and the characteristic target strength per unit biomass. We calculate the uncertainty associated with each of those parameters, and then combine these uncertainties to quantify the uncertainty of the sonar-based estimate. Global backscatter. For calculating the global backscatter, we rely on two sources of data: data from Irigoien et al. and data from Proud et al. We survey both the intra-study and interstudy uncertainty associated with the global backscatter. Intra-study uncertainty. Irigoien et al. provide several estimates for the global scatter based on several different types of equations characterizing the relationship between primary productivity and the NASC. We calculate the 95% confidence interval of the geometric mean of these different estimates. Proud et al. estimate a global backscatter of 6.02×$10^9$ $m^2$ ± 1.4×$10^9$ $m^2$. We thus use this range as a measure of the intra-study uncertainty in the estimate of Proud et al.
###Code
# Calculate the intra-study uncertainty of Irigoien et al.
irigoien_CI = geo_CI_calc(scatter['Total backscatter [m^2]'])
# Calculate the intra-study uncertainty of Proud et al.
proud_CI = (1.4e9+6.02e9)/6.02e9
print('The intra-study uncertainty of the total backscatter estimate of Irigoien et al. is ≈%.1f-fold' %irigoien_CI)
print('The intra-study uncertainty of the total backscatter estimate of Proud et al. is ≈%.1f-fold' %proud_CI)
###Output
The intra-study uncertainty of the total backscatter estimate of Irigoien et al. is ≈1.1-fold
The intra-study uncertainty of the total backscatter estimate of Proud et al. is ≈1.2-fold
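###Markdown
The helpers `geo_CI_calc` and `CI_prod_prop` (and, later, `CI_sum_prop`) come from the repo's `CI_helper` module and are used throughout the uncertainty analysis. The sketch below shows the computations they are assumed to perform; the actual implementations live in `statistics_helper/CI_helper.py`, so treat this as illustrative rather than authoritative.
###Code
# Assumed behaviour of the CI_helper utilities (illustrative sketch only)
import numpy as np

def geo_CI_calc_sketch(estimates):
    # Multiplicative 95% CI of the geometric mean, computed in log space
    log_vals = np.log(estimates)
    se = np.std(log_vals, ddof=1) / np.sqrt(len(log_vals))
    return np.exp(1.96 * se)  # fold-change factor around the geometric mean

def CI_prod_prop_sketch(mul_CIs):
    # Propagate independent fold-change uncertainties through a product
    # by combining them in quadrature in log space; CI_sum_prop is assumed
    # to combine absolute uncertainties of summed estimates analogously
    return np.exp(np.sqrt(np.sum(np.log(mul_CIs) ** 2)))
###Output
_____no_output_____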
###Markdown
Interstudy uncertaintyAs a measure of the interstudy uncertainty of the global backscatter, we calculate the 95% confidence interval of the geometric mean of the estimate from Irigoien et al. and Proud et al.:
###Code
# Calculate the interstudy uncertainty of the global backscatter
bs_inter_CI = geo_CI_calc([irigoien_backscatter,proud_backscatter])
print('The interstudy uncertainty of the total backscatter is ≈%.1f-fold' %bs_inter_CI)
# Take the highest uncertainty as our best projection of the uncertainty associated with the global backscatter
bs_CI = np.max([irigoien_CI,proud_CI,bs_inter_CI])
###Output
The interstudy uncertainty of the total backscatter is ≈1.7-fold
###Markdown
We use the highest uncertainty among these different kinds of uncertainty measures as our best projection of the uncertainty of the global backscatter, which is ≈1.7-fold. Target strength per unit biomass. To assess the uncertainty associated with the target strength per unit biomass, we calculate the uncertainty in estimating the characteristic target strength per unit biomass of fish with or without a swimbladder, and the uncertainty associated with the fraction of the population that either has or lacks a swimbladder. Uncertainty of the characteristic target strength per unit biomass of fish with or without swimbladder. We calculate the 95% confidence interval of the target strength of fish with or without a swimbladder, and propagate this confidence interval to the total estimate of biomass to assess the uncertainty associated with the estimate of the target strength. We calculated an uncertainty of ≈1.3-fold associated with the estimate of the target strength per unit biomass of fish.
###Code
# Define the function that computes the standard error used for the 95% confidence interval
def CI_groupby(input):
    return input['dB kg^-1'].std(ddof=1)/np.sqrt(input['dB kg^-1'].shape[0])
# Group target strength values by the presence or absence of swimbladder
ts_bin = ts.groupby('Swimbladder')
# Calculate the standard error of those values
ts_bin_CI = ts_bin.apply(CI_groupby)
ts_CI = []
# For the target strength of fish with or without swimbladder, sample 1000 times from the distribution
# of target strengths, and calculate the estimate of the total biomass of fish. Then calculate the 95%
# confidence interval of the resulting distribution as a measure of the uncertainty in the biomass
# estimate resulting from the uncertainty in the target strength
for x, instance in enumerate(ts_bin_CI):
ts_dist = np.random.normal(TS_bin['dB kg^-1'][x],instance,1000)
biomass_dist = biomass_estimator(ts_dist,TS_bin['dB kg^-1'][1-x],best_backscatter,frac=0.5)*1000*0.15
upper_CI = np.percentile(biomass_dist,97.5)/np.mean(biomass_dist)
lower_CI = np.mean(biomass_dist)/np.percentile(biomass_dist,2.5)
ts_CI.append(np.mean([upper_CI,lower_CI]))
# Take the maximum uncertainty of the with or with out swimbladder as our best projection
ts_CI = np.max(ts_CI)
print('Our best projection for the uncertainty associated with the estimate of the target strength per unit biomass is ≈%.1f-fold' %ts_CI)
###Output
Our best projection for the uncertainty associated with the estimate of the target strength per unit biomass is ≈1.3-fold
###Markdown
Uncertainty of the fraction of the population possessing a swimbladder. As a measure of the uncertainty associated with the assumption that fish with or without a swimbladder contribute similar portions to the total population of mesopelagic fish, we sample different ratios of fish with and without a swimbladder, and calculate the biomass estimate for those fractions.
###Code
# Sample different fractions of fish with swimbladder
ratio_range = np.linspace(0,1,1000)
# Estimate the biomass of mesopelagic fish using the sampled fraction
biomass_ratio_dist = biomass_estimator(*TS_bin['dB kg^-1'],best_backscatter,ratio_range)*1000*0.15/1e15
# Plot the results for all fractions
plt.plot(ratio_range,biomass_ratio_dist)
plt.xlabel('Fraction of the population possessing swimbladder')
plt.ylabel('Biomass estimate [Gt C]')
###Output
_____no_output_____
###Markdown
We take the 95% range of the distribution of the fraction of fish with a swimbladder into account and calculate the uncertainty this fraction introduces into the sonar-based estimate of mesopelagic fish biomass. Over this range, the confidence interval of the biomass estimate is ≈8.7-fold.
###Code
# Calculate the upper and lower bounds of the influence of the fraction of fish with swimbladder on biomass estimate
ratio_upper_CI = (biomass_estimator(*TS_bin['dB kg^-1'],best_backscatter,0.975)*1000*0.15)/sonar_biomass
ratio_lower_CI = sonar_biomass/(biomass_estimator(*TS_bin['dB kg^-1'],best_backscatter,0)*1000*0.15)
ratio_CI = np.max([ratio_upper_CI,ratio_lower_CI])
print('Our best projection for the uncertainty associated with the fraction of fish possessing swimbladder is ≈%.1f-fold' %ratio_CI)
###Output
Our best projection for the uncertainty associated with the fraction of fish possessing swimbladder is ≈8.7-fold
###Markdown
To calculate the total uncertainty associated with the sonar-based estimate, we propagate the uncertainties associated with the total backscatter, the target strength per unit biomass and the fraction of fish with swimbladder.
###Code
sonar_CI = CI_prod_prop(np.array([ratio_CI,ts_CI,bs_CI]))
print('Our best projection for the uncertainty associated with the sonar-based estimate for the biomass of mesopelagic fish is ≈%.1f-fold' %sonar_CI)
###Output
Our best projection for the uncertainty associated with the sonar-based estimate for the biomass of mesopelagic fish is ≈9.5-fold
###Markdown
Inter-method uncertaintyAs a measure of the inter-method uncertainty of our estimate of the biomass of mesopelagic fish, we calculate the 95% confidence interval of the geometric mean of the sonar-based estimate and the trawling-based estimate.
###Code
meso_inter_CI = geo_CI_calc(np.array([sonar_biomass,trawling_biomass]))
print('Our best projection for the inter method uncertainty associated with estimate of the biomass of mesopelagic fish is ≈%.1f-fold' %meso_inter_CI)
# Take the highest uncertainty as our best projection for the uncertainty associated with the estimate
# of the biomass of mesopelagic fish
meso_CI = np.max([meso_inter_CI,sonar_CI])
###Output
Our best projection for the inter method uncertainty associated with estimate of the biomass of mesopelagic fish is ≈11.3-fold
###Markdown
Comparing our projections for the uncertainty of the sonar-based estimate of the biomass of mesopelagic fish and the inter-method uncertainty, our best projection for the uncertainty of the biomass of mesopelagic fish is about one order of magnitude. Non-mesopelagic fish biomass uncertainty. For estimating the biomass of non-mesopelagic fish, we rely on estimates by Wilson et al., which do not report an uncertainty range for the biomass of non-mesopelagic fish. A later study ([Jennings et al.](https://doi.org/10.1371/journal.pone.0133794)) gave an estimate for the total biomass of fish with body weight of 1 g to 1000 kg, based on ecological models. Jennings et al. report a 90% confidence interval of 0.34-26.12 Gt wet weight, with a median estimate of ≈5 Gt wet weight. We take this range as a crude measure of the uncertainty associated with the estimate of the biomass of non-mesopelagic fish.
###Code
# Calculate the uncertainty of the non-mesopelagic fish biomass
non_meso_CI = np.max([26.12/5,5/0.34])
# Propagate the uncertainties of mesopelagic fish biomass and non-mesopelagic fish biomass to the total estimate
# of fish biomass
mul_CI = CI_sum_prop(estimates=np.array([best_mesopelagic_biomass,non_mesopelagic_fish_biomass]), mul_CIs=np.array([meso_CI,non_meso_CI]))
print('Our best projection for the uncertainty associated with the estimate of the biomass of fish is ≈%.1f-fold' %mul_CI)
###Output
Our best projection for the uncertainty associated with the estimate of the biomass of fish is ≈8.3-fold
###Markdown
Prehuman fish biomass. To estimate the prehuman fish biomass, we rely on a study ([Costello et al.](http://dx.doi.org/10.1073/pnas.1520420113)) which states that fish stocks in global fisheries stand at 1.17 times the maximum-sustainable-yield (MSY) biomass, when looking at all fisheries and calculating a catch-weighted average global fishery (Figure S12 in the SI Appendix of Costello et al.). Costello et al. also report the total biomass of present-day fisheries at 0.84 Gt wet weight (Table S15 in the SI Appendix of Costello et al.). Assuming 70% water content and 50% carbon content out of the dry weight, this translates to:
###Code
costello_ww = 0.84
wet_to_c = 0.3*0.5
costello_cc = costello_ww*wet_to_c
print('Costello et al. estimate ≈%.2f Gt C of current fisheries' %costello_cc)
###Output
Costello et al. estimate ≈0.13 Gt C of current fisheries
###Markdown
This number is close to the number reported by Wilson et al. Using a database of published landings data and stock-assessment biomass estimates, [Thorson et al.](http://dx.doi.org/10.1139/f2012-077) estimate that the biomass of fish at the maximum sustainable yield represents ≈40% of the biomass the population would have reached with no fishing. From these two numbers, we can estimate the prehuman biomass of fish in fisheries: we take the total biomass of fisheries reported in Costello et al., divide it by the 1.17 ratio reported in Costello et al. to estimate the maximum-sustainable-yield biomass, and then divide this number by 0.4 to arrive at the prehuman biomass of fish in fisheries. We add to this estimate the estimate of the total biomass of mesopelagic fish, assuming their biomass was not affected by humans.
###Code
costello_ratio = 1.17
thorson_ratio = 0.4
prehuman_biomass_fisheries = costello_cc*1e15/costello_ratio/thorson_ratio
prehuman_fish_biomass = prehuman_biomass_fisheries+best_mesopelagic_biomass
print('Our estimate for the total prehuman biomass of fish is ≈%.1f Gt C' %(prehuman_fish_biomass/1e15))
###Output
Our estimate for the total prehuman biomass of fish is ≈0.8 Gt C
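###Markdown
In numbers: $$biomass_{prehuman} \approx \frac{0.13}{1.17 \times 0.4} + 0.52 \approx 0.27 + 0.52 \approx 0.8 \: Gt \: C$$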
###Markdown
Comparing the prehuman fish biomass to the present day fish biomass, we can estimate the human associated reduction in fish biomass:
###Code
fish_biomass_decrease = prehuman_fish_biomass-best_estimate
print('Our estimate for the decrease in the total biomass of fish is ≈%.2f Gt C' %(fish_biomass_decrease/1e15))
###Output
Our estimate for the decrease in the total biomass of fish is ≈0.12 Gt C
###Markdown
Which means that, based on the assumptions in our calculation, the decrease in the total biomass of fish is about the same as the remaining total mass of fish in all fisheries (disregarding mesopelagic fish). Estimating the total number of fish. To estimate the total number of fish, we divide our estimate of the total biomass of mesopelagic fish by an estimate for the characteristic carbon content of a single mesopelagic fish. To estimate the mean weight of mesopelagic fish, we rely on data reported in [Fock & Ehrich](https://doi.org/10.1111/j.1439-0426.2010.01450.x) for the family Myctophidae (lanternfish), which dominates the mesopelagic fish community. Fock & Ehrich report the length range of each fish species, as well as allometric relations between fish length and weight for each species. Here is a sample of the data:
###Code
# Load the data from Fock & Ehrich
fe_data = pd.read_excel('fish_biomass_data.xlsx','Fock & Ehrich', skiprows=1)
# Use only data for the Myctophidae family
fe_mycto = fe_data[fe_data['Family'] == 'Myctophidae']
fe_mycto.head()
###Output
_____no_output_____
###Markdown
The allometric parameters a and b are plugged into the following equation to produce the weight of each fish species based on its length: $$ W = a \times L^b$$ Where W is the fish weight and L is the fish length. For each fish species, we calculate the characteristic fish length as the mean of the minimum and maximum reported lengths:
###Code
# Average the minimum and maximum length per species (axis=0 keeps one value per species)
fe_mean_length = np.mean([fe_mycto['Maximum length (mm)'].astype(float),fe_mycto['Minimum length (mm)'].astype(float)], axis=0)
###Output
_____no_output_____
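###Markdown
As a quick illustration of the allometric relation (the parameter values below are invented for demonstration; they are not taken from Fock & Ehrich):
###Code
# Hypothetical example: a = 0.01, b = 3.0 for a 50 mm fish
a_demo, b_demo = 0.01, 3.0
length_cm = 50 / 10  # the data is in mm, the equation takes cm
print('W ≈ %.2f g wet weight' % (a_demo * length_cm ** b_demo))
###Output
W ≈ 1.25 g wet weight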
###Markdown
We plug the mean length of each fish species into the allometric equation along with its specific parameters a and b to generate the mean wet weight of each fish species. We use the geometric mean of the weights of all species as our best estimate of the weight of a single mesopelagic fish. We convert wet weight to carbon mass assuming 70% water content and 50% carbon out of the dry weight.
###Code
# The allometric equation to convert fish length into fish weight. The equation takes values
# in cm and the data is given in mm so we divide the length by a factor of 10
calculate_weight = lambda x,a,b: a*(x/10)**b
# Transform the mean lengths of each fish species into a characteristic weight of each fish species
fe_mean_weight = calculate_weight(fe_mean_length,fe_mycto['a(SL)'],fe_mycto['b(SL)'])
# Conversion factor from wet weight to carbon mass
wet_to_c = 0.15
# Calculate the mean carbon content of a single mesopelagic fish
fish_cc = gmean(fe_mean_weight.astype(float))*wet_to_c
print('Our best estimate for the carbon content of a single mesopelagic fish is ≈%.2f g C' %fish_cc)
###Output
Our best estimate for the carbon content of a single mesopelagic fish is ≈0.46 g C
###Markdown
We estimate the total number of mesopelagic fish by dividing our best estimate for the total biomass of mesopelagic fish by our estimate for the carbon content of a single mesopelagic fish:
###Code
# Estimate the total number of fish
tot_fish_num = best_mesopelagic_biomass/fish_cc
print('Our best estimate for the total number of individual fish is ≈%.0e.' %tot_fish_num)
# Feed results to the chordate biomass data
old_results = pd.read_excel('../../animal_biomass_estimate.xlsx',index_col=0)
result = old_results.copy()
result.loc['Fish',(['Biomass [Gt C]','Uncertainty'])] = (best_estimate/1e15,mul_CI)
result.to_excel('../../animal_biomass_estimate.xlsx')
# Feed results to Table 1 & Fig. 1
update_results(sheet='Table1 & Fig1',
row=('Animals','Fish'),
col=['Biomass [Gt C]', 'Uncertainty'],
values=[best_estimate/1e15,mul_CI],
path='../../../results.xlsx')
# Feed results to Table S1
update_results(sheet='Table S1',
row=('Animals','Fish'),
col=['Number of individuals'],
values=tot_fish_num,
path='../../../results.xlsx')
# Update the data mentioned in the MS
update_MS_data(row ='Decrease in biomass of fish',
values=fish_biomass_decrease/1e15,
path='../../../results.xlsx')
###Output
_____no_output_____ |
SVM(Support Vector Machine).ipynb | ###Markdown
We train a support vector machine classifier using scikit-learn's `SVC` and evaluate it on the held-out test set (see the self-contained sketch below for how the splits might be defined).
###Code
from sklearn.svm import SVC
ytest  # inspect the held-out test labels
model = SVC()  # support vector classifier with the default RBF kernel
model.fit(xtrain, ytrain)  # fit on the training split
model.score(xtest, ytest)  # mean accuracy on the test split
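###Output
_____no_output_____
###Markdown
`xtrain`, `xtest`, `ytrain`, and `ytest` are not defined in this notebook; the following self-contained sketch shows one hypothetical way such splits might be produced. The sample dataset and split parameters are assumptions for illustration, not part of the original notebook.
###Code
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Hypothetical data: the iris dataset stands in for the original (unknown) data
X, y = load_iris(return_X_y=True)
xtrain, xtest, ytrain, ytest = train_test_split(X, y, test_size=0.25, random_state=42)

model = SVC()  # default RBF kernel
model.fit(xtrain, ytrain)
model.score(xtest, ytest)  # mean accuracy on the held-out split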
###Output
_____no_output_____ |
docs/beta/notebooks/Slicer.ipynb | ###Markdown
Tracking Failure Origins The question of "Where does this value come from?" is fundamental for debugging. Which earlier variables could possibly have influenced the current erroneous state? And how did their values come to be?When programmers read code during debugging, they scan it for potential _origins_ of given values. This can be a tedious experience, notably if the origins are spread across multiple separate locations, possibly even in different modules. In this chapter, we thus investigate means to _determine such origins_ automatically – by collecting data and control dependencies during program execution.
###Code
from bookutils import YouTubeVideo
YouTubeVideo("sjf3cOR0lcI")
###Output
_____no_output_____
###Markdown
**Prerequisites*** You should have read the [Introduction to Debugging](Intro_Debugging).* To understand how to compute dependencies automatically (the second half of this chapter), you will need * advanced knowledge of Python semantics * knowledge on how to instrument and transform code * knowledge on how an interpreter works
###Code
import bookutils
from bookutils import quiz, next_inputs, print_content
###Output
_____no_output_____
###Markdown
SynopsisTo [use the code provided in this chapter](Importing.ipynb), write```python>>> from debuggingbook.Slicer import <identifier>```and then make use of the following features.This chapter provides a `Slicer` class to automatically determine and visualize dynamic dependencies. When we say that a variable $x$ depends on a variable $y$ (written $x \leftarrow y$), we distinguish two kinds of dependencies:* **data dependencies**: $x$ obtains its value from a computation involving the value of $y$.* **control dependencies**: $x$ obtains its value because of a computation involving the value of $y$.Such dependencies are crucial for debugging, as they allow us to determine the origins of individual values (and notably incorrect values).To determine dynamic dependencies in a function `func` and its callees `func1`, `func2`, etc., use```pythonwith Slicer(func, func1, func2) as slicer: ```and then `slicer.graph()` or `slicer.code()` to examine dependencies.Here is an example. The `demo()` function computes some number from `x`:```python>>> def demo(x: int) -> int:>>> z = x>>> while x <= z <= 64:>>> z *= 2>>> return z```By using `with Slicer(demo)`, we first instrument `demo()` and then execute it:```python>>> with Slicer(demo) as slicer:>>> demo(10)```After execution is complete, you can output `slicer` to visualize the dependencies as a graph. Data dependencies are shown as black solid edges; control dependencies are shown as grey dashed edges. We see how the parameter `x` flows into `z`, which is returned after some computation that is control dependent on a `<test>` involving `z`.```python>>> slicer```An alternate representation is `slicer.code()`, annotating the instrumented source code with (backward) dependencies. Data dependencies are shown with `<=`, control dependencies with `<-`; locations (lines) are shown in parentheses.```python>>> slicer.code()* 1 def demo(x: int) -> int:* 2 z = x # <= x (1)* 3 while x <= z <= 64: # <= z (4), z (2), x (1)* 4 z *= 2 # <- <test> (3)* 5 return z # <= z (4)```Dependencies can also be retrieved programmatically. The `dependencies()` method returns a `Dependencies` object encapsulating the dependency graph.The method `all_vars()` returns all variables in the dependency graph. Each variable is encoded as a pair (_name_, _location_) where _location_ is a pair (_codename_, _lineno_).```python>>> slicer.dependencies().all_vars(){('<demo() return value>', (<function demo(x: int) -> int>, 5)), ('<test>', (<function demo(x: int) -> int>, 3)), ('x', (<function demo(x: int) -> int>, 1)), ('z', (<function demo(x: int) -> int>, 2)), ('z', (<function demo(x: int) -> int>, 4))}````code()` and `graph()` methods can also be applied on dependencies. The method `backward_slice(var)` returns a backward slice for the given variable. To retrieve where `z` in Line 2 came from, use:```python>>> _, start_demo = inspect.getsourcelines(demo)>>> start_demo1>>> slicer.dependencies().backward_slice(('z', (demo, start_demo + 1))).graph()  # type: ignore```Here are the classes defined in this chapter. A `Slicer` instruments a program, using a `DependencyTracker` at run time to collect `Dependencies`.\todo{Use slices to enforce (lack of) specific information flows}\todo{Use slices in statistical debugging} DependenciesIn the [Introduction to debugging](Intro_Debugging.ipynb), we have seen how faults in a program state propagate to eventually become visible as failures. This induces a debugging strategy called _tracking origins_: 1. We start with a single faulty state _f_ – the failure 2. We determine f's _origins_ – the parts of earlier states that could have caused the faulty state _f_ 3. For each of these origins _e_, we determine whether they are faulty or not 4. 
For each of the faulty origins, we in turn determine _their_ origins. 5. If we find a part of the state that is faulty, yet has only correct origins, we have found the defect. In all generality, a "part of the state" can be anything that can influence the program – some configuration setting, some database content, or the state of a device. Almost always, though, it is through _individual variables_ that a part of the state manifests itself.The good news is that variables do not take arbitrary values at arbitrary times – instead, they are set and accessed at precise moments in time, as determined by the program's semantics. This allows us to determine their _origins_ by reading program code. Let us assume you have a piece of code that reads as follows. The `middle()` function is supposed to return the "middle" number of three values `x`, `y`, and `z` – that is, the one number that is neither the minimum nor the maximum.
###Code
def middle(x, y, z): # type: ignore
if y < z:
if x < y:
return y
elif x < z:
return y
else:
if x > y:
return y
elif x > z:
return x
return z
###Output
_____no_output_____
###Markdown
In most cases, `middle()` runs just fine:
###Code
m = middle(1, 2, 3)
m
###Output
_____no_output_____
###Markdown
In others, however, it returns the wrong value:
###Code
m = middle(2, 1, 3)
m
###Output
_____no_output_____
###Markdown
This is a typical debugging situation: You see a value that is erroneous, and you want to find out where it came from. * In our case, we see that the erroneous value was returned from `middle()`, so we identify the five `return` statements in `middle()` that the value could have come from.* The value returned is the value of `y`, and neither `x`, `y`, nor `z` are altered during the execution of `middle()`. Hence, it must be one of the three `return y` statements that is the origin of `m`. But which one?For our small example, we can fire up an interactive debugger and simply step through the function; this reveals the conditions evaluated and the `return` statement executed.
###Code
import Debugger # minor dependency
# ignore
next_inputs(["step", "step", "step", "step", "quit"]);
with Debugger.Debugger():
middle(2, 1, 3)
###Output
Calling middle(z = 3, y = 1, x = 2)
###Markdown
We now see that it was the second `return` statement that returned the incorrect value. But why was it executed after all? To this end, we can resort to the `middle()` source code and have a look at those conditions that caused the `return y` statement to be executed. Indeed, the conditions `y < z`, `x > y`, and finally `x < z` again are _origins_ of the returned value – and in turn have `x`, `y`, and `z` as origins. In our above reasoning about origins, we have encountered two kinds of origins:* earlier _data values_ (such as the value of `y` being returned) and* earlier _control conditions_ (such as the `if` conditions governing the `return y` statement).The later parts of the state that can be influenced by such origins are said to be _dependent_ on these origins. Speaking of variables, a variable $x$ _depends_ on the value of a variable $y$ (written as $x \leftarrow y$) if a change in $y$ could affect the value of $x$. We distinguish two kinds of dependencies $x \leftarrow y$, aligned with the two kinds of origins as outlined above:* **Data dependency**: $x$ obtains its value from a computation involving the value of $y$. In our example, `m` is data dependent on the return value of `middle()`.* **Control dependency**: $x$ obtains its value because of a computation involving the value of $y$. In our example, the value returned by `return y` is control dependent on the several conditions along its path, which involve `x`, `y`, and `z`. Let us examine these dependencies in more detail. Excursion: Visualizing Dependencies Note: This is an excursion, diverting away from the main flow of the chapter. Unless you know what you are doing, you are encouraged to skip this part. To illustrate our examples, we introduce a `Dependencies` class that captures dependencies between variables at specific locations. A Class for Dependencies `Dependencies` holds two dependency graphs. `data` holds data dependencies, `control` holds control dependencies. Each of the two is organized as a dictionary holding _nodes_ as keys and sets of nodes as values. Each node comes as a tuple```python(variable_name, location) ``` where `variable_name` is a string and `location` is a pair```python(func, lineno) ``` denoting a unique location in the code. This is also reflected in the following type definitions:
###Code
from typing import Set, List, Tuple, Any, Callable, Dict, Optional, Union, Type
from typing import Generator
Location = Tuple[Callable, int]
Node = Tuple[str, Location]
Dependency = Dict[Node, Set[Node]]
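# Example node (illustrative, not from the original text):
# ('x', (middle, 2)) denotes variable x at line 2 of middle()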
class Dependencies:
"""A dependency graph"""
def __init__(self,
data: Optional[Dependency] = None,
control: Optional[Dependency] = None) -> None:
"""
Create a dependency graph from `data` and `control`.
Both `data` and `control` are dictionaries
holding _nodes_ as keys and sets of nodes as values.
Each node comes as a tuple (variable_name, location)
where `variable_name` is a string
and `location` is a pair (function, lineno)
where `function` is a callable and `lineno` is a line number
denoting a unique location in the code.
"""
if data is None:
data = {}
if control is None:
control = {}
self.data = data
self.control = control
for var in self.data:
self.control.setdefault(var, set())
for var in self.control:
self.data.setdefault(var, set())
self.validate()
def validate(self) -> None:
...
###Output
_____no_output_____
###Markdown
The `validate()` method checks for consistency.
###Code
class Dependencies(Dependencies):
def validate(self) -> None:
"""Check dependency structure."""
assert isinstance(self.data, dict)
assert isinstance(self.control, dict)
for node in (self.data.keys()) | set(self.control.keys()):
var_name, location = node
assert isinstance(var_name, str)
func, lineno = location
assert callable(func)
assert isinstance(lineno, int)
###Output
_____no_output_____
###Markdown
In this chapter, for many purposes, we need to look up a function's location, source code, or simply its definition. The class `StackInspector` provides a number of convenience functions for this purpose. When we access or execute functions, we do so in the caller's environment, not ours. The `caller_globals()` method acts as a replacement for `globals()`. The method `caller_frame()` walks up the current call stack and returns the topmost frame invoking a method or function from the current class.
###Code
import inspect
import warnings  # used by caller_function() below
from types import FunctionType, FrameType
from typing import cast
class StackInspector:
"""Provide functions to inspect the stack"""
def caller_frame(self) -> FrameType:
"""Return the frame of the caller."""
# Walk up the call tree until we leave the current class
frame = cast(FrameType, inspect.currentframe())
while ('self' in frame.f_locals and
isinstance(frame.f_locals['self'], self.__class__)):
frame = cast(FrameType, frame.f_back)
return frame
def caller_globals(self) -> Dict[str, Any]:
"""Return the globals() environment of the caller."""
return self.caller_frame().f_globals
def caller_locals(self) -> Dict[str, Any]:
"""Return the locals() environment of the caller."""
return self.caller_frame().f_locals
###Output
_____no_output_____
###Markdown
`caller_location()` returns the caller's function and its location. It does a fair bit of magic to retrieve nested functions, by looking through global and local variables until a match is found. This may be simplified in the future.
###Code
class StackInspector(StackInspector):
def caller_location(self) -> Location:
"""Return the location (func, lineno) of the caller."""
return self.caller_function(), self.caller_frame().f_lineno
def search_frame(self, name: str) -> Tuple[Optional[FrameType], Optional[Callable]]:
"""Return a pair (`frame`, `item`)
in which the function named `name` is defined as `item`."""
frame = self.caller_frame()
while frame:
item = None
if name in frame.f_globals:
item = frame.f_globals[name]
if name in frame.f_locals:
item = frame.f_locals[name]
if item and callable(item):
return frame, item
frame = cast(FrameType, frame.f_back)
return None, None
def search_func(self, name: str) -> Optional[Callable]:
"""Search in callers for a definition of the function `name`"""
frame, func = self.search_frame(name)
return func
def caller_function(self) -> Callable:
"""Return the calling function"""
frame = self.caller_frame()
name = frame.f_code.co_name
func = self.search_func(name)
if func:
return func
if not name.startswith('<'):
warnings.warn(f"Couldn't find {name} in caller")
try:
# Create new function from given code
return FunctionType(frame.f_code,
globals=frame.f_globals,
name=name)
except TypeError:
# Unsuitable code for creating a function
# Last resort: Return some function
return self.unknown
except Exception as exc:
# Any other exception
warnings.warn(f"Couldn't create function for {name} "
f" ({type(exc).__name__}: {exc})")
return self.unknown
def unknown(self) -> None: # Placeholder for unknown functions
pass
###Output
_____no_output_____
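###Markdown
A quick illustrative check (not part of the original text): calling `caller_location()` on a throwaway `StackInspector` instance from a plain function. The helper name `demo_caller` is made up for this example.
###Code
def demo_caller() -> None:
    inspector = StackInspector()
    func, lineno = inspector.caller_location()  # reports the calling function and line
    print(func.__name__, lineno)

demo_caller()
###Output
demo_caller 3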
###Markdown
We make the `StackInspector` methods available as part of the `Dependencies` class.
###Code
class Dependencies(Dependencies, StackInspector):
pass
###Output
_____no_output_____
###Markdown
The `source()` method returns the source code for a given node.
###Code
import warnings
class Dependencies(Dependencies):
def _source(self, node: Node) -> str:
# Return source line, or ''
(name, location) = node
func, lineno = location
if not func:
# No source
return ''
try:
source_lines, first_lineno = inspect.getsourcelines(func)
except OSError:
warnings.warn(f"Couldn't find source "
f"for {func} ({func.__name__})")
return ''
try:
line = source_lines[lineno - first_lineno].strip()
except IndexError:
return ''
return line
def source(self, node: Node) -> str:
"""Return the source code for a given node."""
line = self._source(node)
if line:
return line
(name, location) = node
func, lineno = location
code_name = func.__name__
if code_name.startswith('<'):
return code_name
else:
return f'<{code_name}()>'
test_deps = Dependencies()
test_deps.source(('z', (middle, 1)))
###Output
_____no_output_____
###Markdown
Drawing Dependencies Both data and control dependencies form a graph between nodes, and can be visualized as such. We use the `graphviz` package for creating such visualizations.
###Code
from graphviz import Digraph, nohtml
###Output
_____no_output_____
###Markdown
`make_graph()` sets the basic graph attributes.
###Code
import html
class Dependencies(Dependencies):
NODE_COLOR = 'peachpuff'
FONT_NAME = 'Fira Mono, Courier, monospace'
def make_graph(self, name: str = "dependencies", comment: str = "Dependencies") -> Digraph:
return Digraph(name=name, comment=comment,
graph_attr={
},
node_attr={
'style': 'filled',
'shape': 'box',
'fillcolor': self.NODE_COLOR,
'fontname': self.FONT_NAME
},
edge_attr={
'fontname': self.FONT_NAME
})
###Output
_____no_output_____
###Markdown
`graph()` returns a graph visualization.
###Code
class Dependencies(Dependencies):
def graph(self) -> Digraph:
"""Draw dependencies."""
self.validate()
g = self.make_graph()
self.draw_dependencies(g)
self.add_hierarchy(g)
return g
def draw_dependencies(self, g: Digraph) -> None:
...
def add_hierarchy(self, g: Digraph) -> Digraph:
...
def _repr_svg_(self) -> Any:
"""If the object is output in Jupyter, render dependencies as a SVG graph"""
return self.graph()._repr_svg_()
###Output
_____no_output_____
###Markdown
The main part of graph drawing takes place in two methods, `draw_dependencies()` and `add_hierarchy()`. `draw_dependencies()` walks through the graph, adding nodes and edges from the dependencies.
###Code
class Dependencies(Dependencies):
def all_vars(self) -> Set[Node]:
"""Return a set of all variables (as `var_name`, `location`) in the dependencies"""
all_vars = set()
for var in self.data:
all_vars.add(var)
for source in self.data[var]:
all_vars.add(source)
for var in self.control:
all_vars.add(var)
for source in self.control[var]:
all_vars.add(source)
return all_vars
class Dependencies(Dependencies):
def draw_dependencies(self, g: Digraph) -> None:
for var in self.all_vars():
g.node(self.id(var),
label=self.label(var),
tooltip=self.tooltip(var))
if var in self.data:
for source in self.data[var]:
g.edge(self.id(source), self.id(var))
if var in self.control:
for source in self.control[var]:
g.edge(self.id(source), self.id(var),
style='dashed', color='grey')
def id(self, var: Node) -> str:
...
def label(self, var: Node) -> str:
...
def tooltip(self, var: Node) -> str:
...
###Output
_____no_output_____
###Markdown
`draw_dependencies()` makes use of a few helper functions.
###Code
class Dependencies(Dependencies):
def id(self, var: Node) -> str:
"""Return a unique ID for `var`."""
id = ""
# Avoid non-identifier characters
for c in repr(var):
if c.isalnum() or c == '_':
id += c
if c == ':' or c == ',':
id += '_'
return id
def label(self, var: Node) -> str:
"""Render node `var` using HTML style."""
(name, location) = var
source = self.source(var)
title = html.escape(name)
if name.startswith('<'):
title = f'<I>{title}</I>'
label = f'<B>{title}</B>'
if source:
label += (f'<FONT POINT-SIZE="9.0"><BR/><BR/>'
f'{html.escape(source)}'
f'</FONT>')
label = f'<{label}>'
return label
def tooltip(self, var: Node) -> str:
"""Return a tooltip for node `var`."""
(name, location) = var
func, lineno = location
return f"{func.__name__}:{lineno}"
###Output
_____no_output_____
###Markdown
In the second part of graph drawing, `add_hierarchy()` adds invisible edges to ensure that nodes with lower line numbers are drawn above nodes with higher line numbers.
###Code
class Dependencies(Dependencies):
def add_hierarchy(self, g: Digraph) -> Digraph:
"""Add invisible edges for a proper hierarchy."""
functions = self.all_functions()
for func in functions:
last_var = None
last_lineno = 0
for (lineno, var) in functions[func]:
if last_var is not None and lineno > last_lineno:
g.edge(self.id(last_var),
self.id(var),
style='invis')
last_var = var
last_lineno = lineno
return g
def all_functions(self) -> Dict[Callable, List[Tuple[int, Node]]]:
...
class Dependencies(Dependencies):
def all_functions(self) -> Dict[Callable, List[Tuple[int, Node]]]:
"""Return mapping {`function`: [(`lineno`, `var`), (`lineno`, `var`), ...], ...}
for all functions in the dependencies."""
functions: Dict[Callable, List[Tuple[int, Node]]] = {}
for var in self.all_vars():
(name, location) = var
func, lineno = location
if func not in functions:
functions[func] = []
functions[func].append((lineno, var))
for func in functions:
functions[func].sort()
return functions
###Output
_____no_output_____
###Markdown
Here comes the graph in all its glory:
###Code
def middle_deps() -> Dependencies:
return Dependencies({('z', (middle, 1)): set(), ('y', (middle, 1)): set(), ('x', (middle, 1)): set(), ('<test>', (middle, 2)): {('y', (middle, 1)), ('z', (middle, 1))}, ('<test>', (middle, 3)): {('y', (middle, 1)), ('x', (middle, 1))}, ('<test>', (middle, 5)): {('z', (middle, 1)), ('x', (middle, 1))}, ('<middle() return value>', (middle, 6)): {('y', (middle, 1))}}, {('z', (middle, 1)): set(), ('y', (middle, 1)): set(), ('x', (middle, 1)): set(), ('<test>', (middle, 2)): set(), ('<test>', (middle, 3)): {('<test>', (middle, 2))}, ('<test>', (middle, 5)): {('<test>', (middle, 3))}, ('<middle() return value>', (middle, 6)): {('<test>', (middle, 5))}})
middle_deps()
###Output
_____no_output_____
###Markdown
SlicesThe method `backward_slice(*criteria, mode='cd')` returns a subset of dependencies, following dependencies backward from the given *slicing criteria* `criteria`. These criteria can be* variable names (such as `<test>`); or* `(function, lineno)` pairs (such as `(middle, 3)`); or* `(var_name, (function, lineno))` locations (such as `('x', (middle, 1))`).The extra parameter `mode` controls which dependencies are to be followed:* **`d`** = data dependencies* **`c`** = control dependencies
###Code
Criterion = Union[str, Location, Node]
class Dependencies(Dependencies):
def expand_criteria(self, criteria: List[Criterion]) -> List[Node]:
"""Return list of vars matched by `criteria`."""
all_vars = []
for criterion in criteria:
criterion_var = None
criterion_func = None
criterion_lineno = None
if isinstance(criterion, str):
criterion_var = criterion
elif len(criterion) == 2 and callable(criterion[0]):
criterion_func, criterion_lineno = criterion
elif len(criterion) == 2 and isinstance(criterion[0], str):
criterion_var = criterion[0]
criterion_func, criterion_lineno = criterion[1]
else:
raise ValueError("Invalid argument")
for var in self.all_vars():
(var_name, location) = var
func, lineno = location
name_matches = (criterion_func is None or
criterion_func == func or
criterion_func.__name__ == func.__name__)
location_matches = (criterion_lineno is None or
criterion_lineno == lineno)
var_matches = (criterion_var is None or
criterion_var == var_name)
if name_matches and location_matches and var_matches:
all_vars.append(var)
return all_vars
def backward_slice(self, *criteria: Criterion,
mode: str = 'cd', depth: int = -1) -> Dependencies:
"""
Create a backward slice from nodes `criteria`.
`mode` can contain 'c' (draw control dependencies)
and 'd' (draw data dependencies) (default: 'cd')
"""
data = {}
control = {}
queue = self.expand_criteria(criteria) # type: ignore
seen = set()
while len(queue) > 0 and depth != 0:
var = queue[0]
queue = queue[1:]
seen.add(var)
if 'd' in mode:
# Follow data dependencies
data[var] = self.data[var]
for next_var in data[var]:
if next_var not in seen:
queue.append(next_var)
else:
data[var] = set()
if 'c' in mode:
# Follow control dependencies
control[var] = self.control[var]
for next_var in control[var]:
if next_var not in seen:
queue.append(next_var)
else:
control[var] = set()
depth -= 1
return Dependencies(data, control)
###Output
_____no_output_____
###Markdown
End of Excursion Data Dependencies Here is an example of a data dependency in our `middle()` program. The value `y` returned by `middle()` comes from the value `y` as originally passed as an argument. We use arrows $x \leftarrow y$ to indicate that a variable $x$ depends on an earlier variable $y$:
###Code
# ignore
middle_deps().backward_slice('<middle() return value>', mode='d') # type: ignore
###Output
_____no_output_____
###Markdown
Here, we can see that the value `y` in the return statement is data dependent on the value of `y` as passed to `middle()`. An alternate interpretation of this graph is a *data flow*: The value of `y` in the upper node _flows_ into the value of `y` in the lower node. Since we consider the values of variables at specific locations in the program, such data dependencies can also be interpreted as dependencies between _statements_ – the above `return` statement thus is data dependent on the initialization of `y` in the upper node. Control DependenciesHere is an example of a control dependency. The execution of the above `return` statement is controlled by the earlier test `x < z`. We use grey dashed lines to indicate control dependencies:
###Code
# ignore
middle_deps().backward_slice('<middle() return value>', mode='c', depth=1) # type: ignore
###Output
_____no_output_____
###Markdown
This test in turn is controlled by earlier tests, so the full chain of control dependencies looks like this:
###Code
# ignore
middle_deps().backward_slice('<middle() return value>', mode='c') # type: ignore
###Output
_____no_output_____
###Markdown
Dependency Graphs As the above `<test>` values (and their statements) are in turn also dependent on earlier data – namely the `x`, `y`, and `z` values as originally passed – we can draw all data and control dependencies in a single graph, called a _program dependency graph_:
###Code
# ignore
middle_deps()
###Output
_____no_output_____
###Markdown
This graph now gives us an idea on how to proceed to track the origins of the `middle()` return value at the bottom. Its value can come from any of the origins – namely the initialization of `y` at the function call, or from the `<test>` that controls it. This test in turn depends on `x` and `z` and their associated statements, which we now can check one after the other. Note that all these dependencies in the graph are _dynamic_ dependencies – that is, they refer to statements actually evaluated in the run at hand, as well as the decisions made in that very run. There also are _static_ dependency graphs coming from static analysis of the code; but for debugging, _dynamic_ dependencies specific to the failing run are more useful. Showing Dependencies with CodeWhile a graph gives us a representation of which possible data and control flows to track, integrating dependencies with actual program code results in a compact representation that is easy to reason about. Excursion: Listing Dependencies To show dependencies as text, we introduce a method `format_var()` that shows a single node (a variable) as text. By default, a node is referenced as```pythonNAME (FUNCTION:LINENO)```However, within a given function, it makes no sense to re-state the function name again and again, so we have a shorthand```pythonNAME (LINENO)```to state a dependency to variable `NAME` in line `LINENO`.
###Code
class Dependencies(Dependencies):
def format_var(self, var: Node, current_func: Optional[Callable] = None) -> str:
"""Return string for `var` in `current_func`."""
name, location = var
func, lineno = location
if current_func and (func == current_func or func.__name__ == current_func.__name__):
return f"{name} ({lineno})"
else:
return f"{name} ({func.__name__}:{lineno})"
###Output
_____no_output_____
###Markdown
`format_var()` is used extensively in the `__str__()` string representation of dependencies, listing all nodes and their data (`<=`) and control (`<-`) dependencies.
###Code
class Dependencies(Dependencies):
def __str__(self) -> str:
"""Return string representation of dependencies"""
self.validate()
out = ""
for func in self.all_functions():
code_name = func.__name__
if out != "":
out += "\n"
out += f"{code_name}():\n"
all_vars = list(set(self.data.keys()) | set(self.control.keys()))
all_vars.sort(key=lambda var: var[1][1])
for var in all_vars:
(name, location) = var
var_func, var_lineno = location
var_code_name = var_func.__name__
if var_code_name != code_name:
continue
all_deps = ""
for (source, arrow) in [(self.data, "<="), (self.control, "<-")]:
deps = ""
for data_dep in source[var]:
if deps == "":
deps = f" {arrow} "
else:
deps += ", "
deps += self.format_var(data_dep, func)
if deps != "":
if all_deps != "":
all_deps += ";"
all_deps += deps
if all_deps == "":
continue
out += (" " +
self.format_var(var, func) +
all_deps + "\n")
return out
###Output
_____no_output_____
###Markdown
Here is a compact string representation of dependencies. We see how the (last) `middle() return value` has a data dependency to `y` in Line 1, and a control dependency to the `<test>` in Line 5.
###Code
print(middle_deps())
###Output
middle():
<test> (2) <= z (1), y (1)
<test> (3) <= x (1), y (1); <- <test> (2)
<test> (5) <= z (1), x (1); <- <test> (3)
<middle() return value> (6) <= y (1); <- <test> (5)
###Markdown
The `__repr__()` method shows a raw form of dependencies, useful for creating dependencies from scratch.
###Code
class Dependencies(Dependencies):
def repr_var(self, var: Node) -> str:
name, location = var
func, lineno = location
return f"({repr(name)}, ({func.__name__}, {lineno}))"
def repr_deps(self, var_set: Set[Node]) -> str:
if len(var_set) == 0:
return "set()"
return ("{" +
", ".join(f"{self.repr_var(var)}"
for var in var_set) +
"}")
def repr_dependencies(self, vars: Dependency) -> str:
return ("{\n " +
",\n ".join(
f"{self.repr_var(var)}: {self.repr_deps(vars[var])}"
for var in vars) +
"}")
def __repr__(self) -> str:
"""Represent dependencies as a Python expression"""
# Useful for saving and restoring values
return (f"Dependencies(\n" +
f" data={self.repr_dependencies(self.data)},\n" +
f" control={self.repr_dependencies(self.control)})")
print(repr(middle_deps()))
###Output
Dependencies(
data={
('z', (middle, 1)): set(),
('y', (middle, 1)): set(),
('x', (middle, 1)): set(),
('<test>', (middle, 2)): {('z', (middle, 1)), ('y', (middle, 1))},
('<test>', (middle, 3)): {('x', (middle, 1)), ('y', (middle, 1))},
('<test>', (middle, 5)): {('z', (middle, 1)), ('x', (middle, 1))},
('<middle() return value>', (middle, 6)): {('y', (middle, 1))}},
control={
('z', (middle, 1)): set(),
('y', (middle, 1)): set(),
('x', (middle, 1)): set(),
('<test>', (middle, 2)): set(),
('<test>', (middle, 3)): {('<test>', (middle, 2))},
('<test>', (middle, 5)): {('<test>', (middle, 3))},
('<middle() return value>', (middle, 6)): {('<test>', (middle, 5))}})
###Markdown
An even more useful representation comes when integrating these dependencies as comments into the code. The method `code(item_1, item_2, ...)` lists the given (function) items, including their dependencies; `code()` lists _all_ functions contained in the dependencies.
###Code
class Dependencies(Dependencies):
def code(self, *items: Callable, mode: str = 'cd') -> None:
"""
List `items` on standard output, including dependencies as comments.
If `items` is empty, all included functions are listed.
`mode` can contain 'c' (draw control dependencies) and 'd' (draw data dependencies)
(default: 'cd').
"""
if len(items) == 0:
items = cast(Tuple[Callable], self.all_functions().keys())
for i, item in enumerate(items):
if i > 0:
print()
self._code(item, mode)
def _code(self, item: Callable, mode: str) -> None:
# The functions in dependencies may be (instrumented) copies
# of the original function. Find the function with the same name.
func = item
for fn in self.all_functions():
if fn == item or fn.__name__ == item.__name__:
func = fn
break
all_vars = self.all_vars()
slice_locations = set(location for (name, location) in all_vars)
source_lines, first_lineno = inspect.getsourcelines(func)
n = first_lineno
for line in source_lines:
line_location = (func, n)
if line_location in slice_locations:
prefix = "* "
else:
prefix = " "
print(f"{prefix}{n:4} ", end="")
comment = ""
for (mode_control, source, arrow) in [
('d', self.data, '<='),
('c', self.control, '<-')
]:
if mode_control not in mode:
continue
deps = ""
for var in source:
name, location = var
if location == line_location:
for dep_var in source[var]:
if deps == "":
deps = arrow + " "
else:
deps += ", "
deps += self.format_var(dep_var, item)
if deps != "":
if comment != "":
comment += "; "
comment += deps
if comment != "":
line = line.rstrip() + " # " + comment
print_content(line.rstrip(), '.py')
print()
n += 1
###Output
_____no_output_____
###Markdown
End of Excursion The following listing shows such an integration. For each executed line (`*`), we see its data (`<=`) and control (`<-`) dependencies, listing the associated variables and line numbers. The comment```python# <= y (1); <- <test> (5)```for Line 6, for instance, states that the return value is data dependent on the value of `y` in Line 1, and control dependent on the test in Line 5.Again, one can easily follow these dependencies back to track where a value came from (data dependencies) and why a statement was executed (control dependency).
###Code
# ignore
middle_deps().code() # type: ignore
###Output
* 1 [34mdef[39;49;00m [32mmiddle[39;49;00m(x, y, z): [37m# type: ignore[39;49;00m
* 2 [34mif[39;49;00m y < z: [37m# <= z (1), y (1)[39;49;00m
* 3 [34mif[39;49;00m x < y: [37m# <= x (1), y (1); <- <test> (2)[39;49;00m
4 [34mreturn[39;49;00m y
* 5 [34melif[39;49;00m x < z: [37m# <= z (1), x (1); <- <test> (3)[39;49;00m
* 6 [34mreturn[39;49;00m y [37m# <= y (1); <- <test> (5)[39;49;00m
7 [34melse[39;49;00m:
8 [34mif[39;49;00m x > y:
9 [34mreturn[39;49;00m y
10 [34melif[39;49;00m x > z:
11 [34mreturn[39;49;00m x
12 [34mreturn[39;49;00m z
###Markdown
One important aspect of dependencies is that they not only point to specific sources and causes of failures – but that they also _rule out_ parts of program and state as failures.* In the above code, Lines 8 and later have no influence on the output, simply because they were not executed.* Furthermore, we see that we can start our investigation with Line 6, because that is the last one executed.* The data dependencies tell us that no statement has interfered with the value of `y` between the function call and its return.* Hence, the error must be in the conditions and the final `return` statement.With this in mind, recall that our original invocation was `middle(2, 1, 3)`. Why and how is the above code wrong?
###Code
quiz("Which of the following `middle()` code lines should be fixed?",
[
"Line 2: `if y < z:`",
"Line 3: `if x < y:`",
"Line 5: `elif x < z:`",
"Line 6: `return z`",
], '(1 ** 0 + 1 ** 1) ** (1 ** 2 + 1 ** 3)')
###Output
_____no_output_____
###Markdown
Indeed, from the controlling conditions, we see that `y < z`, `x >= y`, and `x < z` all hold. Hence, `y <= x < z` holds, and it is `x`, not `y`, that should be returned. SlicesGiven a dependency graph for a particular variable, we can identify the subset of the program that could have influenced it – the so-called _slice_. In the above code listing, these code locations are highlighted with `*` characters. Only these locations are part of the slice. Slices are central to debugging for two reasons:* First, they _rule out_ those locations of the program that could _not_ have an effect on the failure. Hence, these locations need not be investigated when it comes to searching for the defect. Nor do they need to be considered for a fix, as any change outside of the program slice by construction cannot affect the failure.* Second, they bring together possible origins that may be scattered across the code. Many dependencies in program code are _non-local_, with references to functions, classes, and modules defined in other locations, files, or libraries. A slice brings together all those locations in a single whole. Here is an example of a slice – this time for our well-known `remove_html_markup()` function from [the introduction to debugging](Intro_Debugging.ipynb):
###Code
from Intro_Debugging import remove_html_markup
print_content(inspect.getsource(remove_html_markup), '.py')
###Output
[34mdef[39;49;00m [32mremove_html_markup[39;49;00m(s): [37m# type: ignore[39;49;00m
tag = [34mFalse[39;49;00m
quote = [34mFalse[39;49;00m
out = [33m"[39;49;00m[33m"[39;49;00m
[34mfor[39;49;00m c [35min[39;49;00m s:
[34massert[39;49;00m tag [35mor[39;49;00m [35mnot[39;49;00m quote
[34mif[39;49;00m c == [33m'[39;49;00m[33m<[39;49;00m[33m'[39;49;00m [35mand[39;49;00m [35mnot[39;49;00m quote:
tag = [34mTrue[39;49;00m
[34melif[39;49;00m c == [33m'[39;49;00m[33m>[39;49;00m[33m'[39;49;00m [35mand[39;49;00m [35mnot[39;49;00m quote:
tag = [34mFalse[39;49;00m
[34melif[39;49;00m (c == [33m'[39;49;00m[33m"[39;49;00m[33m'[39;49;00m [35mor[39;49;00m c == [33m"[39;49;00m[33m'[39;49;00m[33m"[39;49;00m) [35mand[39;49;00m tag:
quote = [35mnot[39;49;00m quote
[34melif[39;49;00m [35mnot[39;49;00m tag:
out = out + c
[34mreturn[39;49;00m out
###Markdown
When we invoke `remove_html_markup()` as follows...
###Code
remove_html_markup('<foo>bar</foo>')
###Output
_____no_output_____
###Markdown
... we obtain the following dependencies:
###Code
# ignore
def remove_html_markup_deps() -> Dependencies:
return Dependencies({('s', (remove_html_markup, 136)): set(), ('tag', (remove_html_markup, 137)): set(), ('quote', (remove_html_markup, 138)): set(), ('out', (remove_html_markup, 139)): set(), ('c', (remove_html_markup, 141)): {('s', (remove_html_markup, 136))}, ('<test>', (remove_html_markup, 144)): {('quote', (remove_html_markup, 138)), ('c', (remove_html_markup, 141))}, ('tag', (remove_html_markup, 145)): set(), ('<test>', (remove_html_markup, 146)): {('quote', (remove_html_markup, 138)), ('c', (remove_html_markup, 141))}, ('<test>', (remove_html_markup, 148)): {('c', (remove_html_markup, 141))}, ('<test>', (remove_html_markup, 150)): {('tag', (remove_html_markup, 147)), ('tag', (remove_html_markup, 145))}, ('tag', (remove_html_markup, 147)): set(), ('out', (remove_html_markup, 151)): {('out', (remove_html_markup, 151)), ('c', (remove_html_markup, 141)), ('out', (remove_html_markup, 139))}, ('<remove_html_markup() return value>', (remove_html_markup, 153)): {('<test>', (remove_html_markup, 146)), ('out', (remove_html_markup, 151))}}, {('s', (remove_html_markup, 136)): set(), ('tag', (remove_html_markup, 137)): set(), ('quote', (remove_html_markup, 138)): set(), ('out', (remove_html_markup, 139)): set(), ('c', (remove_html_markup, 141)): set(), ('<test>', (remove_html_markup, 144)): set(), ('tag', (remove_html_markup, 145)): {('<test>', (remove_html_markup, 144))}, ('<test>', (remove_html_markup, 146)): {('<test>', (remove_html_markup, 144))}, ('<test>', (remove_html_markup, 148)): {('<test>', (remove_html_markup, 146))}, ('<test>', (remove_html_markup, 150)): {('<test>', (remove_html_markup, 148))}, ('tag', (remove_html_markup, 147)): {('<test>', (remove_html_markup, 146))}, ('out', (remove_html_markup, 151)): {('<test>', (remove_html_markup, 150))}, ('<remove_html_markup() return value>', (remove_html_markup, 153)): set()})
# ignore
remove_html_markup_deps().graph()
###Output
_____no_output_____
###Markdown
Again, we can read such a graph _forward_ (starting from, say, `s`) or _backward_ (starting from the return value). Starting forward, we see how the passed string `s` flows into the `for` loop, breaking `s` into individual characters `c` that are then checked on various occasions, before flowing into the `out` return value. We also see how the various `if` conditions are all influenced by `c`, `tag`, and `quote`.
###Code
quiz("Why does the first line `tag = False` not influence anything?",
[
"Because the input contains only tags",
"Because `tag` is set to True with the first character",
"Because `tag` is not read by any variable",
"Because the input contains no tags",
], '(1 << 1 + 1 >> 1)')
###Output
_____no_output_____
###Markdown
Which are the locations that set `tag` to True? To this end, we compute the slice of `tag` at `tag = True`:
###Code
# ignore
tag_deps = Dependencies({('tag', (remove_html_markup, 145)): set(), ('<test>', (remove_html_markup, 144)): {('quote', (remove_html_markup, 138)), ('c', (remove_html_markup, 141))}, ('quote', (remove_html_markup, 138)): set(), ('c', (remove_html_markup, 141)): {('s', (remove_html_markup, 136))}, ('s', (remove_html_markup, 136)): set()}, {('tag', (remove_html_markup, 145)): {('<test>', (remove_html_markup, 144))}, ('<test>', (remove_html_markup, 144)): set(), ('quote', (remove_html_markup, 138)): set(), ('c', (remove_html_markup, 141)): set(), ('s', (remove_html_markup, 136)): set()})
tag_deps
###Output
_____no_output_____
###Markdown
We see where the value of `tag` comes from: from the characters `c` in `s` as well as `quote`, which all cause it to be set. Again, we can combine these dependencies and the listing in a single, compact view. Note, again, that there are no other locations in the code that could possibly have affected `tag` in our run.
###Code
# ignore
tag_deps.code()
quiz("How does the slice of `tag = True` change "
"for a different value of `s`?",
[
"Not at all",
"If `s` contains a quote, the `quote` slice is included, too",
"If `s` contains no HTML tag, the slice will be empty"
], '[1, 2, 3][1:]')
###Output
_____no_output_____
###Markdown
Indeed, our dynamic slices reflect dependencies as they occurred within a single execution. As the execution changes, so do the dependencies. Tracking TechniquesFor the remainder of this chapter, let us investigate means to _determine such dependencies_ automatically – by _collecting_ them during program execution. The idea is that with a single Python call, we can collect the dependencies for some computation, and present them to programmers – as graphs or as code annotations, as shown above. To track dependencies, for every variable, we need to keep track of its _origins_ – where it obtained its value, and which tests controlled its assignments. There are two ways to do so:* Wrapping Data Objects* Wrapping Data Accesses Wrapping Data Objects One way to track origins is to _wrap_ each value in a class that stores both a value and the origin of the value. If a variable `x` is initialized to zero in Line 3, for instance, we could store it as```x = (value=0, origin=<Line 3>)```and if it is copied in, say, Line 5 to another variable `y`, we could store this as```y = (value=0, origin=<Line 3, Line 5>)```Such a scheme would allow us to track origins and dependencies right within the variable. In a language like Python, it is actually possible to subclass from basic types. Here's how we create a `MyInt` subclass of `int`:
###Code
class MyInt(int):
def __new__(cls: Type, value: Any, *args: Any, **kwargs: Any) -> Any:
return super(cls, cls).__new__(cls, value)
def __repr__(self) -> str:
return f"{int(self)}"
n: MyInt = MyInt(5)
###Output
_____no_output_____
###Markdown
We can access `n` just like any integer:
###Code
n, n + 1
###Output
_____no_output_____
###Markdown
However, we can also add extra attributes to it:
###Code
n.origin = "Line 5" # type: ignore
n.origin # type: ignore
###Output
_____no_output_____
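###Markdown
A minimal sketch (not from the original text) of how such origins could be propagated through computations by overloading operators; the class name `OriginInt` and its attributes are assumptions for illustration.
###Code
class OriginInt(int):
    """Hypothetical int wrapper that carries its origins along."""
    def __new__(cls, value: Any, origin: Tuple[str, ...] = ()) -> 'OriginInt':
        obj = super().__new__(cls, value)
        obj.origin = tuple(origin)  # locations this value came from
        return obj

    def __add__(self, other: Any) -> 'OriginInt':
        # The sum inherits the origins of both operands
        return OriginInt(int(self) + int(other),
                         self.origin + getattr(other, 'origin', ()))

a = OriginInt(1, origin=('Line 3',))
b = OriginInt(2, origin=('Line 5',))
(a + b), (a + b).origin
###Output
_____no_output_____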
###Markdown
Such a "wrapping" scheme has the advantage of _leaving program code untouched_ – simply pass "wrapped" objects instead of the original values. However, it also has a number of drawbacks.* First, we must make sure that the "wrapper" objects are still compatible with the original values – notably by converting them back whenever needed. (What happens if an internal Python function expects an `int` and gets a `MyInt` instead?)* Second, we have to make sure that origins do not get lost during computations – which involves overloading operators such as `+`, `-`, `*`, and so on. (Right now, `MyInt(1) + 1` gives us an `int` object, not a `MyInt`.)* Third, we have to do this for _all_ data types of a language, which is pretty tedious.* Fourth and last, however, we want to track whenever a value is assigned to another variable. Python has no support for this, and thus our dependencies will necessarily be incomplete. Wrapping Data Accesses An alternate way of tracking origins is to _instrument_ the source code such that all _data read and write operations are tracked_. That is, the original data stays unchanged, but we change the code instead.In essence, for every occurrence of a variable `x` being _read_, we replace it with```python_data.get('x', x) returns x```and for every occurrence of a value being _written_ to `x`, we replace the value with```python_data.set('x', value) returns value```and let the `_data` object track these reads and writes.Hence, an assignment such as ```pythona = b + c```would get rewritten to```pythona = _data.set('a', _data.get('b', b) + _data.get('c', c))```and with every access to `_data`, we would track 1. the current _location_ in the code, and 2. whether the respective variable was read or written.For the above statement, we could deduce that `b` and `c` were read, and `a` was written – which makes `a` data dependent on `b` and `c`. The advantage of such instrumentation is that it works with _arbitrary objects_ (in Python, that is) – we do not case whether `a`, `b`, and `c` would be integers, floats, strings, lists. or any other type for which `+` would be defined. Also, the code semantics remain entirely unchanged.The disadvantage, however, is that it takes a bit of effort to exactly separate reads and writes into individual groups, and that a number of language features have to be handled separately. This is what we do in the remainder of this chapter. A Data TrackerTo implement `_data` accesses as shown above, we introduce the `DataTracker` class. As its name suggests, it keeps track of variables being read and written, and provides methods to determine the code location where this tool place.
###Code
class DataTracker:
"""Track data accesses during execution"""
def __init__(self, log: bool = False) -> None:
"""Constructor. If `log` is set, turn on logging."""
self.log = log
class DataTracker(DataTracker, StackInspector):
pass
###Output
_____no_output_____
###Markdown
`set()` is invoked when a variable is set, as in```pythonpi = _data.set('pi', 3.1415)```By default, we simply log the access using name and value. (`loads` will be used later.)
###Code
class DataTracker(DataTracker):
def set(self, name: str, value: Any, loads: Optional[Set[str]] = None) -> Any:
"""Track setting `name` to `value`."""
if self.log:
caller_func, lineno = self.caller_location()
print(f"{caller_func.__name__}:{lineno}: setting {name}")
return value
###Output
_____no_output_____
###Markdown
`get()` is invoked when a variable is retrieved, as in```pythonprint(_data.get('pi', pi))```By default, we simply log the access.
###Code
class DataTracker(DataTracker):
def get(self, name: str, value: Any) -> Any:
"""Track getting `value` from `name`."""
if self.log:
caller_func, lineno = self.caller_location()
print(f"{caller_func.__name__}:{lineno}: getting {name}")
return value
###Output
_____no_output_____
###Markdown
Here's an example of a logging `DataTracker`:
###Code
_test_data = DataTracker(log=True)
x = _test_data.set('x', 1)
_test_data.get('x', x)
###Output
<module>:1: getting x
###Markdown
Instrumenting Source CodeHow do we transform source code such that read and write accesses to variables are automatically rewritten? To this end, we inspect the internal representation of source code, namely the _abstract syntax trees_ (ASTs). An AST represents the code as a tree, with specific node types for each syntactical element.
###Code
import ast
import astor
from bookutils import show_ast
###Output
_____no_output_____
###Markdown
Here is the tree representation for our `middle()` function. It starts with a `FunctionDef` node at the top (with the name `"middle"` and the three arguments `x`, `y`, `z` as children), followed by a subtree for each of the `If` statements, each of which contains a branch for when its condition evaluates to `True` and a branch for when its condition evaluates to `False`.
###Code
middle_tree = ast.parse(inspect.getsource(middle))
show_ast(middle_tree)
###Output
_____no_output_____
###Markdown
At the very bottom of the tree, you can see a number of `Name` nodes, referring to individual variables. These are the ones we want to transform. Tracking Variable Access Our goal is to _traverse_ the tree, identify all `Name` nodes, and convert them to respective `_data` accesses.To this end, we manipulate the AST through the Python modules `ast` and `astor`. The [official Python `ast` reference](http://docs.python.org/3/library/ast) is complete, but a bit brief; the documentation ["Green Tree Snakes - the missing Python AST docs"](https://greentreesnakes.readthedocs.io/en/latest/) provides an excellent introduction. The Python `ast` module provides a class `NodeTransformer` that allows such transformations. Subclassing from it, we provide a method `visit_Name()` that will be invoked for all `Name` nodes – and replace it by a new subtree from `make_get_data()`:
###Code
from ast import NodeTransformer, NodeVisitor, Name, AST
DATA_TRACKER = '_data'
class TrackGetTransformer(NodeTransformer):
def visit_Name(self, node: Name) -> AST:
self.generic_visit(node)
if node.id in dir(__builtins__):
# Do not change built-in names
return node
if node.id == DATA_TRACKER:
# Do not change own accesses
return node
if not isinstance(node.ctx, Load):
# Only change loads (not stores, not deletions)
return node
new_node = make_get_data(node.id)
ast.copy_location(new_node, node)
return new_node
###Output
_____no_output_____
###Markdown
Our function `make_get_data(id, method)` returns a new subtree equivalent to the Python code `_data.method('id', id)`.
###Code
from ast import Module, Load, Store, \
Attribute, With, withitem, keyword, Call, Expr, Assign, AugAssign
# Starting with Python 3.8, these will become Constant.
# from ast import Num, Str, NameConstant
# Use `ast.Num`, `ast.Str`, and `ast.NameConstant` for compatibility
def make_get_data(id: str, method: str = 'get') -> Call:
return Call(func=Attribute(value=Name(id=DATA_TRACKER, ctx=Load()),
attr=method, ctx=Load()),
args=[ast.Str(s=id), Name(id=id, ctx=Load())],
keywords=[])
###Output
_____no_output_____
###Markdown
This is the tree that `make_get_data()` produces:
###Code
show_ast(Module(body=[make_get_data("x")]))
###Output
_____no_output_____
###Markdown
How do we know that this is a correct subtree? We can carefully read the [official Python `ast` reference](http://docs.python.org/3/library/ast) and then proceed by trial and error (and apply [delta debugging](DeltaDebugger.ipynb) to determine error causes). Or – pro tip! – we can simply take a piece of Python code, parse it and use `ast.dump()` to print out how to construct the resulting AST:
###Code
print(ast.dump(ast.parse("_data.get('x', x)")))
###Output
Module(body=[Expr(value=Call(func=Attribute(value=Name(id='_data', ctx=Load()), attr='get', ctx=Load()), args=[Str(s='x'), Name(id='x', ctx=Load())], keywords=[]))])
###Markdown
If you compare the above output with the code of `make_get_data()`, above, you will find out where the source of `make_get_data()` comes from. Let us put `TrackGetTransformer` into action. Its `visit()` method calls `visit_Name()`, which in turn transforms the `Name` nodes as we want. This happens in place.
###Code
TrackGetTransformer().visit(middle_tree);
###Output
_____no_output_____
###Markdown
To see the effect of our transformations, we introduce a method `dump_tree()` which outputs the tree – and also compiles it to check for any inconsistencies.
###Code
def dump_tree(tree: AST) -> None:
print_content(astor.to_source(tree), '.py')
ast.fix_missing_locations(tree) # Must run this before compiling
_ = compile(tree, '<dump_tree>', 'exec')
###Output
_____no_output_____
###Markdown
We see that our transformer has properly replaced all variable accesses with `_data.get()` calls:
###Code
dump_tree(middle_tree)
###Output
[34mdef[39;49;00m [32mmiddle[39;49;00m(x, y, z):
[34mif[39;49;00m _data.get([33m'[39;49;00m[33my[39;49;00m[33m'[39;49;00m, y) < _data.get([33m'[39;49;00m[33mz[39;49;00m[33m'[39;49;00m, z):
[34mif[39;49;00m _data.get([33m'[39;49;00m[33mx[39;49;00m[33m'[39;49;00m, x) < _data.get([33m'[39;49;00m[33my[39;49;00m[33m'[39;49;00m, y):
[34mreturn[39;49;00m _data.get([33m'[39;49;00m[33my[39;49;00m[33m'[39;49;00m, y)
[34melif[39;49;00m _data.get([33m'[39;49;00m[33mx[39;49;00m[33m'[39;49;00m, x) < _data.get([33m'[39;49;00m[33mz[39;49;00m[33m'[39;49;00m, z):
[34mreturn[39;49;00m _data.get([33m'[39;49;00m[33my[39;49;00m[33m'[39;49;00m, y)
[34melif[39;49;00m _data.get([33m'[39;49;00m[33mx[39;49;00m[33m'[39;49;00m, x) > _data.get([33m'[39;49;00m[33my[39;49;00m[33m'[39;49;00m, y):
[34mreturn[39;49;00m _data.get([33m'[39;49;00m[33my[39;49;00m[33m'[39;49;00m, y)
[34melif[39;49;00m _data.get([33m'[39;49;00m[33mx[39;49;00m[33m'[39;49;00m, x) > _data.get([33m'[39;49;00m[33mz[39;49;00m[33m'[39;49;00m, z):
[34mreturn[39;49;00m _data.get([33m'[39;49;00m[33mx[39;49;00m[33m'[39;49;00m, x)
[34mreturn[39;49;00m _data.get([33m'[39;49;00m[33mz[39;49;00m[33m'[39;49;00m, z)
###Markdown
Let us now execute this code together with the `DataTracker()` class we previously introduced. The class `DataTrackerTester()` takes a (transformed) tree and a function. Using it as```pythonwith DataTrackerTester(tree, func): func(...)```first executes the code in _tree_ (possibly instrumenting `func`) and then the `with` body. At the end, `func` is restored to its previous (non-instrumented) version.
###Code
from types import TracebackType
class DataTrackerTester:
def __init__(self, tree: AST, func: Callable, log: bool = True) -> None:
"""Constructor. Execute the code in `tree` while instrumenting `func`."""
# We pass the source file of `func` such that we can retrieve it
# when accessing the location of the new compiled code
source = cast(str, inspect.getsourcefile(func))
self.code = compile(tree, source, 'exec')
self.func = func
self.log = log
def make_data_tracker(self) -> Any:
return DataTracker(log=self.log)
def __enter__(self) -> Any:
"""Rewrite function"""
tracker = self.make_data_tracker()
globals()[DATA_TRACKER] = tracker
exec(self.code, globals())
return tracker
def __exit__(self, exc_type: Type, exc_value: BaseException,
traceback: TracebackType) -> Optional[bool]:
"""Restore function"""
globals()[self.func.__name__] = self.func
del globals()[DATA_TRACKER]
return None
###Output
_____no_output_____
###Markdown
Here is our `middle()` function:
###Code
print_content(inspect.getsource(middle), '.py', start_line_number=1)
###Output
1 [34mdef[39;49;00m [32mmiddle[39;49;00m(x, y, z): [37m# type: ignore[39;49;00m
2 [34mif[39;49;00m y < z:
3 [34mif[39;49;00m x < y:
4 [34mreturn[39;49;00m y
5 [34melif[39;49;00m x < z:
6 [34mreturn[39;49;00m y
7 [34melse[39;49;00m:
8 [34mif[39;49;00m x > y:
9 [34mreturn[39;49;00m y
10 [34melif[39;49;00m x > z:
11 [34mreturn[39;49;00m x
12 [34mreturn[39;49;00m z
###Markdown
And here is our instrumented `middle_tree` executed with a `DataTracker` object. We see how the `middle()` tests access one argument after another.
###Code
with DataTrackerTester(middle_tree, middle):
middle(2, 1, 3)
###Output
middle:2: getting y
middle:2: getting z
middle:3: getting x
middle:3: getting y
middle:5: getting x
middle:5: getting z
middle:6: getting y
###Markdown
After `DataTrackerTester` is done, `middle` is reverted to its non-instrumented version:
###Code
middle(2, 1, 3)
###Output
_____no_output_____
###Markdown
For a complete picture of what happens during executions, we implement a number of additional code transformers. For each assignment statement `x = y`, we change it to `x = _data.set('x', y)`. This allows us to __track assignments__. Excursion: Tracking Assignments For the remaining transformers, we follow the same steps as for `TrackGetTransformer`, except that our `visit_...()` methods focus on different nodes, and return different subtrees. Here, we focus on assignment nodes. We want to transform assignments `x = value` into `_data.set('x', value)` to track assignments to `x`. If the left hand side of the assignment is more complex, as in `x[y] = value`, we want to ensure the read access to `x` and `y` is also tracked. By transforming `x[y] = value` into `_data.set('x', value, loads=(x, y))`, we ensure that `x` and `y` are marked as read (as the otherwise ignored `loads` argument would be changed to `_data.get()` calls for `x` and `y`). Using `ast.dump()`, we reveal what the corresponding syntax tree has to look like:
###Code
print(ast.dump(ast.parse("_data.set('x', value, loads=(a, b))")))
###Output
Module(body=[Expr(value=Call(func=Attribute(value=Name(id='_data', ctx=Load()), attr='set', ctx=Load()), args=[Str(s='x'), Name(id='value', ctx=Load())], keywords=[keyword(arg='loads', value=Tuple(elts=[Name(id='a', ctx=Load()), Name(id='b', ctx=Load())], ctx=Load()))]))])
###Markdown
Using this structure, we can write a function `make_set_data()` which constructs such a subtree.
###Code
def make_set_data(id: str, value: Any,
loads: Optional[Set[str]] = None, method: str = 'set') -> Call:
"""
Construct a subtree _data.`method`('`id`', `value`).
If `loads` is set to [X1, X2, ...], make it
_data.`method`('`id`', `value`, loads=(X1, X2, ...))
"""
keywords=[]
if loads:
keywords = [
keyword(arg='loads',
value=ast.Tuple(
elts=[Name(id=load, ctx=Load()) for load in loads],
ctx=Load()
))
]
new_node = Call(func=Attribute(value=Name(id=DATA_TRACKER, ctx=Load()),
attr=method, ctx=Load()),
args=[ast.Str(s=id), value],
keywords=keywords)
ast.copy_location(new_node, value)
return new_node
###Output
_____no_output_____
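###Markdown
As a quick sanity check, we can construct such a subtree and dump it. This is a sketch; `subtree` and the names `value`, `a`, and `b` are only for illustration:
###Code
subtree = make_set_data('x', ast.parse('value', mode='eval').body,
                        loads={'a', 'b'})
print(ast.dump(subtree))  # roughly: Call(func=Attribute(..., attr='set', ...), ...)
###Output
_____no_output_____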
###Markdown
The problem is, however: how do we get the name of the variable being assigned to? The left hand side of an assignment can be a complex expression such as `x[i]`. We use the leftmost name of the left hand side as the name being assigned to.
###Code
class LeftmostNameVisitor(NodeVisitor):
def __init__(self) -> None:
super().__init__()
self.leftmost_name: Optional[str] = None
def visit_Name(self, node: Name) -> None:
if self.leftmost_name is None:
self.leftmost_name = node.id
self.generic_visit(node)
def leftmost_name(tree: AST) -> Optional[str]:
visitor = LeftmostNameVisitor()
visitor.visit(tree)
return visitor.leftmost_name
leftmost_name(ast.parse('a[x] = 25'))
###Output
_____no_output_____
###Markdown
Python also allows _tuple assignments_, as in `(a, b, c) = (1, 2, 3)`. We extract all variables being stored (that is, expressions whose `ctx` attribute is `Store()`) and extract their (leftmost) names.
###Code
class StoreVisitor(NodeVisitor):
def __init__(self) -> None:
super().__init__()
self.names: Set[str] = set()
def visit(self, node: AST) -> None:
if hasattr(node, 'ctx') and isinstance(node.ctx, Store): # type: ignore
name = leftmost_name(node)
if name:
self.names.add(name)
self.generic_visit(node)
def store_names(tree: AST) -> Set[str]:
visitor = StoreVisitor()
visitor.visit(tree)
return visitor.names
store_names(ast.parse('a[x], b[y], c = 1, 2, 3'))
###Output
_____no_output_____
###Markdown
For complex assignments, we also want to access the names read in the left hand side of an expression.
###Code
class LoadVisitor(NodeVisitor):
def __init__(self) -> None:
super().__init__()
self.names: Set[str] = set()
def visit(self, node: AST) -> None:
if hasattr(node, 'ctx') and isinstance(node.ctx, Load): # type: ignore
name = leftmost_name(node)
if name is not None:
self.names.add(name)
self.generic_visit(node)
def load_names(tree: AST) -> Set[str]:
visitor = LoadVisitor()
visitor.visit(tree)
return visitor.names
load_names(ast.parse('a[x], b[y], c = 1, 2, 3'))
###Output
_____no_output_____
###Markdown
With this, we can now define `TrackSetTransformer` as a transformer for regular assignments. Note that in Python, an assignment can have multiple targets, as in `a = b = c`; we assign the data dependencies of `c` to them all.
###Code
class TrackSetTransformer(NodeTransformer):
def visit_Assign(self, node: Assign) -> Assign:
value = astor.to_source(node.value)
if value.startswith(DATA_TRACKER + '.set'):
return node # Do not apply twice
for target in node.targets:
loads = load_names(target)
for store_name in store_names(target):
node.value = make_set_data(store_name, node.value,
loads=loads)
loads = set()
return node
###Output
_____no_output_____
###Markdown
The special form of "augmented assign" needs special treatment. We change statements of the form `x += y` to `x += _data.augment('x', y)`.
###Code
class TrackSetTransformer(TrackSetTransformer):
def visit_AugAssign(self, node: AugAssign) -> AugAssign:
value = astor.to_source(node.value)
if value.startswith(DATA_TRACKER):
return node # Do not apply twice
id = cast(str, leftmost_name(node.target))
node.value = make_set_data(id, node.value, method='augment')
return node
###Output
_____no_output_____
###Markdown
The corresponding `augment()` method uses a combination of `set()` and `get()` to reflect the semantics.
###Code
class DataTracker(DataTracker):
def augment(self, name: str, value: Any) -> Any:
"""Track augmenting `name` with `value`.
To be overloaded in subclasses."""
self.set(name, self.get(name, value))
return value
###Output
_____no_output_____
###Markdown
Here's both of these transformers in action. Our original function has a number of assignments:
###Code
def assign_test(x): # type: ignore
fourty_two = forty_two = 42
a, b, c = 1, 2, 3
c[d[x]].attr = 47
foo *= bar + 1
assign_tree = ast.parse(inspect.getsource(assign_test))
TrackSetTransformer().visit(assign_tree)
dump_tree(assign_tree)
###Output
def assign_test(x):
    fourty_two = forty_two = _data.set('forty_two', _data.set('fourty_two', 42))
    a, b, c = _data.set('c', _data.set('b', _data.set('a', (1, 2, 3))))
    c[d[x]].attr = _data.set('c', 47, loads=(x, c, d))
    foo *= _data.augment('foo', bar + 1)
###Markdown
If we later apply our transformer for data accesses, we can see that we track all variable reads and writes.
###Code
TrackGetTransformer().visit(assign_tree)
dump_tree(assign_tree)
###Output
def assign_test(x):
    fourty_two = forty_two = _data.set('forty_two', _data.set('fourty_two', 42))
    a, b, c = _data.set('c', _data.set('b', _data.set('a', (1, 2, 3))))
    _data.get('c', c)[_data.get('d', d)[_data.get('x', x)]].attr = _data.set(
        'c', 47, loads=(_data.get('x', x), _data.get('c', c), _data.get('d', d)))
    foo *= _data.augment('foo', _data.get('bar', bar) + 1)
###Markdown
End of Excursion Each return statement `return x` is transformed to `return _data.set('<func() return value>', x)` (where `func` is the name of the enclosing function). This allows us to __track return values__. Excursion: Tracking Return Values Our `TrackReturnTransformer` also makes use of `make_set_data()`.
###Code
class TrackReturnTransformer(NodeTransformer):
def __init__(self) -> None:
self.function_name: Optional[str] = None
super().__init__()
def visit_FunctionDef(self, node: Union[ast.FunctionDef, ast.AsyncFunctionDef]) -> AST:
outer_name = self.function_name
self.function_name = node.name # Save current name
self.generic_visit(node)
self.function_name = outer_name
return node
def visit_AsyncFunctionDef(self, node: ast.AsyncFunctionDef) -> AST:
return self.visit_FunctionDef(node)
def return_value(self, tp: str = "return") -> str:
if self.function_name is None:
return f"<{tp} value>"
else:
return f"<{self.function_name}() {tp} value>"
def visit_return_or_yield(self, node: Union[ast.Return, ast.Yield, ast.YieldFrom],
tp: str = "return") -> AST:
if node.value is not None:
value = astor.to_source(node.value)
if not value.startswith(DATA_TRACKER + '.set'):
node.value = make_set_data(self.return_value(tp), node.value)
return node
def visit_Return(self, node: ast.Return) -> AST:
return self.visit_return_or_yield(node, tp="return")
def visit_Yield(self, node: ast.Yield) -> AST:
return self.visit_return_or_yield(node, tp="yield")
def visit_YieldFrom(self, node: ast.YieldFrom) -> AST:
return self.visit_return_or_yield(node, tp="yield")
###Output
_____no_output_____
###Markdown
This is the effect of `TrackReturnTransformer`. We see that all return values are saved, and thus all locations of the corresponding return statements are tracked.
###Code
TrackReturnTransformer().visit(middle_tree)
dump_tree(middle_tree)
with DataTrackerTester(middle_tree, middle):
middle(2, 1, 3)
###Output
middle:2: getting y
middle:2: getting z
middle:3: getting x
middle:3: getting y
middle:5: getting x
middle:5: getting z
middle:6: getting y
middle:6: setting <middle() return value>
###Markdown
End of Excursion To track __control dependencies__, for every block controlled by an `if`, `while`, or `for`:

1. We wrap their tests in a `_data.test()` wrapper. This allows us to assign pseudo-variables like `<test>` which hold the conditions.
2. We wrap their controlled blocks in a `with` statement. This allows us to track the variables read right before the `with` (= the controlling variables), and to restore the current controlling variables when the block is left.

A statement

```python
if cond:
    body
```

thus becomes

```python
if _data.test(cond):
    with _data:
        body
```

Excursion: Tracking Control To modify control statements, we traverse the tree, looking for `If` nodes:
###Code
class TrackControlTransformer(NodeTransformer):
def visit_If(self, node: ast.If) -> ast.If:
self.generic_visit(node)
node.test = self.make_test(node.test)
node.body = self.make_with(node.body)
node.orelse = self.make_with(node.orelse)
return node
def make_with(self, block: List[ast.stmt]) -> List[ast.stmt]:
...
def make_test(self, test: ast.expr) -> ast.expr:
...
###Output
_____no_output_____
###Markdown
The subtrees come from helper functions `make_with()` and `make_test()`. Again, all these subtrees are obtained via `ast.dump()`.
###Code
class TrackControlTransformer(TrackControlTransformer):
def make_with(self, block: List[ast.stmt]) -> List[ast.stmt]:
"""Create a subtree 'with _data: `block`'"""
if len(block) == 0:
return []
block_as_text = astor.to_source(block[0])
if block_as_text.startswith('with ' + DATA_TRACKER):
return block # Do not apply twice
new_node = With(
items=[
withitem(
context_expr=Name(id=DATA_TRACKER, ctx=Load()),
optional_vars=None)
],
body=block
)
ast.copy_location(new_node, block[0])
return [new_node]
class TrackControlTransformer(TrackControlTransformer):
def make_test(self, test: ast.expr) -> ast.expr:
test_as_text = astor.to_source(test)
if test_as_text.startswith(DATA_TRACKER + '.test'):
return test # Do not apply twice
new_test = Call(func=Attribute(value=Name(id=DATA_TRACKER, ctx=Load()),
attr='test',
ctx=Load()),
args=[test],
keywords=[])
ast.copy_location(new_test, test)
return new_test
###Output
_____no_output_____
###Markdown
`while` loops are handled just like `if` constructs.
###Code
class TrackControlTransformer(TrackControlTransformer):
def visit_While(self, node: ast.While) -> ast.While:
self.generic_visit(node)
node.test = self.make_test(node.test)
node.body = self.make_with(node.body)
node.orelse = self.make_with(node.orelse)
return node
###Output
_____no_output_____
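###Markdown
As a small sketch (the `loop_tree` example is hypothetical), the `while` condition is wrapped in `_data.test()` and its body in `with _data:`, just as for `if`:
###Code
loop_tree = ast.parse("while x < 10:\n    x = x - 1")
TrackControlTransformer().visit(loop_tree)
dump_tree(loop_tree)
# Expected (roughly):
#     while _data.test(x < 10):
#         with _data:
#             x = x - 1
###Output
_____no_output_____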
###Markdown
`for` loops get a different treatment, as there is no condition that would control the body. Still, we ensure that setting the iterator variable is properly tracked. (A small sketch follows the code below.)
###Code
class TrackControlTransformer(TrackControlTransformer):
# regular `for` loop
def visit_For(self, node: Union[ast.For, ast.AsyncFor]) -> AST:
self.generic_visit(node)
id = astor.to_source(node.target).strip()
node.iter = make_set_data(id, node.iter)
# Uncomment if you want iterators to control their bodies
# node.body = self.make_with(node.body)
# node.orelse = self.make_with(node.orelse)
return node
# `for` loops in async functions
def visit_AsyncFor(self, node: ast.AsyncFor) -> AST:
return self.visit_For(node)
# `for` clause in comprehensions
def visit_comprehension(self, node: ast.comprehension) -> AST:
self.generic_visit(node)
id = astor.to_source(node.target).strip()
node.iter = make_set_data(id, node.iter)
return node
###Output
_____no_output_____
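###Markdown
Here is a sketch of the effect (`for_tree` is a hypothetical example); only the iterator expression is instrumented, while the body is left alone:
###Code
for_tree = ast.parse("for elem in xs:\n    total = total + elem")
TrackControlTransformer().visit(for_tree)
dump_tree(for_tree)
# Expected (roughly):
#     for elem in _data.set('elem', xs):
#         total = total + elem
###Output
_____no_output_____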
###Markdown
Here is the effect of `TrackControlTransformer`:
###Code
TrackControlTransformer().visit(middle_tree)
dump_tree(middle_tree)
###Output
def middle(x, y, z):
    if _data.test(_data.get('y', y) < _data.get('z', z)):
        with _data:
            if _data.test(_data.get('x', x) < _data.get('y', y)):
                with _data:
                    return _data.set('<middle() return value>', _data.get('y', y))
            else:
                with _data:
                    if _data.test(_data.get('x', x) < _data.get('z', z)):
                        with _data:
                            return _data.set('<middle() return value>', _data.get('y', y))
    else:
        with _data:
            if _data.test(_data.get('x', x) > _data.get('y', y)):
                with _data:
                    return _data.set('<middle() return value>', _data.get('y', y))
            else:
                with _data:
                    if _data.test(_data.get('x', x) > _data.get('z', z)):
                        with _data:
                            return _data.set('<middle() return value>', _data.get('x', x))
    return _data.set('<middle() return value>', _data.get('z', z))
###Markdown
We extend `DataTracker` to also log these events:
###Code
class DataTracker(DataTracker):
def test(self, cond: AST) -> AST:
"""Test condition `cond`. To be overloaded in subclasses."""
if self.log:
caller_func, lineno = self.caller_location()
print(f"{caller_func.__name__}:{lineno}: testing condition")
return cond
class DataTracker(DataTracker):
def __enter__(self) -> Any:
"""Enter `with` block. To be overloaded in subclasses."""
if self.log:
caller_func, lineno = self.caller_location()
print(f"{caller_func.__name__}:{lineno}: entering block")
return self
def __exit__(self, exc_type: Type, exc_value: BaseException,
traceback: TracebackType) -> Optional[bool]:
"""Exit `with` block. To be overloaded in subclasses."""
if self.log:
caller_func, lineno = self.caller_location()
print(f"{caller_func.__name__}:{lineno}: exiting block")
return None
with DataTrackerTester(middle_tree, middle):
middle(2, 1, 3)
###Output
middle:2: getting y
middle:2: getting z
middle:2: testing condition
middle:3: entering block
middle:3: getting x
middle:3: getting y
middle:3: testing condition
middle:5: entering block
middle:5: getting x
middle:5: getting z
middle:5: testing condition
middle:6: entering block
middle:6: getting y
middle:6: setting <middle() return value>
middle:6: exiting block
middle:6: exiting block
middle:6: exiting block
###Markdown
End of Excursion We also want to be able to __track calls__ across multiple functions. To this end, we wrap each call

```python
func(arg1, arg2, ...)
```

into

```python
_data.ret(_data.call(func)(_data.arg(arg1), _data.arg(arg2), ...))
```

Each of these wrappers simply passes through its given argument, but allows us to track the beginning of calls (`call()`), the computation of arguments (`arg()`), and the return of the call (`ret()`), respectively. Excursion: Tracking Calls and Arguments Our `TrackCallTransformer` visits all `Call` nodes, applying the transformations as shown above.
###Code
class TrackCallTransformer(NodeTransformer):
def make_call(self, node: AST, func: str,
pos: Optional[int] = None, kw: Optional[str] = None) -> Call:
"""Return _data.call(`func`)(`node`)"""
keywords = []
# `Num()` and `Str()` are deprecated in favor of `Constant()`
if pos:
keywords.append(keyword(arg='pos', value=ast.Num(pos)))
if kw:
keywords.append(keyword(arg='kw', value=ast.Str(kw)))
return Call(func=Attribute(value=Name(id=DATA_TRACKER,
ctx=Load()),
attr=func,
ctx=Load()),
args=[node],
keywords=keywords)
def visit_Call(self, node: Call) -> Call:
self.generic_visit(node)
call_as_text = astor.to_source(node)
if call_as_text.startswith(DATA_TRACKER + '.ret'):
return node # Already applied
func_as_text = astor.to_source(node)
if func_as_text.startswith(DATA_TRACKER + '.'):
return node # Own function
new_args = []
for n, arg in enumerate(node.args):
new_args.append(self.make_call(arg, 'arg', pos=n + 1))
node.args = cast(List[ast.expr], new_args)
for kw in node.keywords:
id = kw.arg if hasattr(kw, 'arg') else None
kw.value = self.make_call(kw.value, 'arg', kw=id)
node.func = self.make_call(node.func, 'call')
return self.make_call(node, 'ret')
###Output
_____no_output_____
###Markdown
Our example function `middle()` does not contain any calls, but here is a function that invokes `middle()` twice:
###Code
def test_call() -> int:
x = middle(1, 2, z=middle(1, 2, 3))
return x
call_tree = ast.parse(inspect.getsource(test_call))
dump_tree(call_tree)
###Output
def test_call() -> int:
    x = middle(1, 2, z=middle(1, 2, 3))
    return x
###Markdown
If we invoke `TrackCallTransformer` on this testing function, we get the following transformed code:
###Code
TrackCallTransformer().visit(call_tree);
dump_tree(call_tree)
import math  # needed by f() below
def f() -> bool:
return math.isclose(1, 1.0)
f_tree = ast.parse(inspect.getsource(f))
dump_tree(f_tree)
TrackCallTransformer().visit(f_tree);
dump_tree(f_tree)
###Output
def f() -> bool:
    return _data.ret(_data.call(math.isclose)(_data.arg(1, pos=1),
                                              _data.arg(1.0, pos=2)))
###Markdown
As before, our default `arg()`, `ret()`, and `call()` methods simply log the event and pass through the given value.
###Code
class DataTracker(DataTracker):
def arg(self, value: Any, pos: Optional[int] = None, kw: Optional[str] = None) -> Any:
"""
Track `value` being passed as argument.
`pos` (if given) is the argument position (starting with 1).
`kw` (if given) is the argument keyword.
"""
if self.log:
caller_func, lineno = self.caller_location()
info = ""
if pos:
info += f" #{pos}"
if kw:
info += f" {repr(kw)}"
print(f"{caller_func.__name__}:{lineno}: pushing arg{info}")
return value
class DataTracker(DataTracker):
def ret(self, value: Any) -> Any:
"""Track `value` being used as return value."""
if self.log:
caller_func, lineno = self.caller_location()
print(f"{caller_func.__name__}:{lineno}: returned from call")
return value
class DataTracker(DataTracker):
def call(self, func: Callable) -> Callable:
"""Track a call to `func`."""
if self.log:
caller_func, lineno = self.caller_location()
print(f"{caller_func.__name__}:{lineno}: calling {func}")
return func
dump_tree(call_tree)
with DataTrackerTester(call_tree, test_call):
test_call()
test_call()
###Output
_____no_output_____
###Markdown
End of Excursion On the receiving end, for each function argument `x`, we insert a call `_data.param('x', x, [position info])` to initialize `x`. This is useful for __tracking parameters across function calls.__ Excursion: Tracking Parameters Again, we use `ast.dump()` to determine the correct syntax tree:
###Code
print(ast.dump(ast.parse("_data.param('x', x, pos=1, last=True)")))
class TrackParamsTransformer(NodeTransformer):
def visit_FunctionDef(self, node: ast.FunctionDef) -> ast.FunctionDef:
self.generic_visit(node)
named_args = []
for child in ast.iter_child_nodes(node.args):
if isinstance(child, ast.arg):
named_args.append(child)
create_stmts = []
for n, child in enumerate(named_args):
keywords=[keyword(arg='pos', value=ast.Num(n=n + 1))]
if child is node.args.vararg:
keywords.append(keyword(arg='vararg', value=ast.Str(s='*')))
if child is node.args.kwarg:
keywords.append(keyword(arg='vararg', value=ast.Str(s='**')))
if n == len(named_args) - 1:
keywords.append(keyword(arg='last',
value=ast.NameConstant(value=True)))
create_stmt = Expr(
value=Call(
func=Attribute(value=Name(id=DATA_TRACKER, ctx=Load()),
attr='param', ctx=Load()),
args=[ast.Str(s=child.arg),
Name(id=child.arg, ctx=Load())
],
keywords=keywords
)
)
ast.copy_location(create_stmt, node)
create_stmts.append(create_stmt)
node.body = cast(List[ast.stmt], create_stmts) + node.body
return node
###Output
_____no_output_____
###Markdown
This is the effect of `TrackParamsTransformer()`. You see how the first three parameters are all initialized.
###Code
TrackParamsTransformer().visit(middle_tree)
dump_tree(middle_tree)
###Output
def middle(x, y, z):
    _data.param('x', x, pos=1)
    _data.param('y', y, pos=2)
    _data.param('z', z, pos=3, last=True)
    if _data.test(_data.get('y', y) < _data.get('z', z)):
        with _data:
            if _data.test(_data.get('x', x) < _data.get('y', y)):
                with _data:
                    return _data.set('<middle() return value>', _data.get('y', y))
            else:
                with _data:
                    if _data.test(_data.get('x', x) < _data.get('z', z)):
                        with _data:
                            return _data.set('<middle() return value>', _data.get('y', y))
    else:
        with _data:
            if _data.test(_data.get('x', x) > _data.get('y', y)):
                with _data:
                    return _data.set('<middle() return value>', _data.get('y', y))
            else:
                with _data:
                    if _data.test(_data.get('x', x) > _data.get('z', z)):
                        with _data:
                            return _data.set('<middle() return value>', _data.get('x', x))
    return _data.set('<middle() return value>', _data.get('z', z))
###Markdown
By default, the `DataTracker` `param()` method simply calls `set()` to set variables.
###Code
class DataTracker(DataTracker):
def param(self, name: str, value: Any,
pos: Optional[int] = None, vararg: str = '', last: bool = False) -> Any:
"""
At the beginning of a function, track parameter `name` being set to `value`.
`pos` is the position of the argument (starting with 1).
`vararg` is "*" if `name` is a vararg parameter (as in *args),
and "**" if `name` is a kwargs parameter (as in **kwargs).
`last` is True if `name` is the last parameter.
"""
if self.log:
caller_func, lineno = self.caller_location()
info = ""
if pos is not None:
info += f" #{pos}"
print(f"{caller_func.__name__}:{lineno}: initializing {vararg}{name}{info}")
return self.set(name, value)
with DataTrackerTester(middle_tree, middle):
middle(2, 1, 3)
def args_test(x, *args, **kwargs): # type: ignore
print(x, *args, **kwargs)
args_tree = ast.parse(inspect.getsource(args_test))
TrackParamsTransformer().visit(args_tree)
dump_tree(args_tree)
with DataTrackerTester(args_tree, args_test):
args_test(1, 2, 3)
###Output
args_test:1: initializing x #1
args_test:1: setting x
args_test:1: initializing *args #2
args_test:1: setting args
args_test:1: initializing **kwargs #3
args_test:1: setting kwargs
1 2 3
###Markdown
End of Excursion What do we obtain after we have applied all these transformers on `middle()`? We see that the code now contains quite a load of instrumentation.
###Code
dump_tree(middle_tree)
###Output
def middle(x, y, z):
    _data.param('x', x, pos=1)
    _data.param('y', y, pos=2)
    _data.param('z', z, pos=3, last=True)
    if _data.test(_data.get('y', y) < _data.get('z', z)):
        with _data:
            if _data.test(_data.get('x', x) < _data.get('y', y)):
                with _data:
                    return _data.set('<middle() return value>', _data.get('y', y))
            else:
                with _data:
                    if _data.test(_data.get('x', x) < _data.get('z', z)):
                        with _data:
                            return _data.set('<middle() return value>', _data.get('y', y))
    else:
        with _data:
            if _data.test(_data.get('x', x) > _data.get('y', y)):
                with _data:
                    return _data.set('<middle() return value>', _data.get('y', y))
            else:
                with _data:
                    if _data.test(_data.get('x', x) > _data.get('z', z)):
                        with _data:
                            return _data.set('<middle() return value>', _data.get('x', x))
    return _data.set('<middle() return value>', _data.get('z', z))
###Markdown
And when we execute this code, we see that we can track quite a number of events, while the code semantics stay unchanged.
###Code
with DataTrackerTester(middle_tree, middle):
m = middle(2, 1, 3)
m
###Output
middle:1: initializing x #1
middle:1: setting x
middle:1: initializing y #2
middle:1: setting y
middle:1: initializing z #3
middle:1: setting z
middle:2: getting y
middle:2: getting z
middle:2: testing condition
middle:3: entering block
middle:3: getting x
middle:3: getting y
middle:3: testing condition
middle:5: entering block
middle:5: getting x
middle:5: getting z
middle:5: testing condition
middle:6: entering block
middle:6: getting y
middle:6: setting <middle() return value>
middle:6: exiting block
middle:6: exiting block
middle:6: exiting block
###Markdown
Excursion: Transformer Stress Test We stress test our transformers by instrumenting, transforming, and compiling a number of modules.
###Code
import Assertions # minor dependency
import Debugger # minor dependency
for module in [Assertions, Debugger, inspect, ast, astor]:
module_tree = ast.parse(inspect.getsource(module))
TrackCallTransformer().visit(module_tree)
TrackSetTransformer().visit(module_tree)
TrackGetTransformer().visit(module_tree)
TrackControlTransformer().visit(module_tree)
TrackReturnTransformer().visit(module_tree)
TrackParamsTransformer().visit(module_tree)
# dump_tree(module_tree)
ast.fix_missing_locations(module_tree) # Must run this before compiling
module_code = compile(module_tree, '<stress_test>', 'exec')
print(f"{repr(module.__name__)} instrumented successfully.")
###Output
'Assertions' instrumented successfully.
'Debugger' instrumented successfully.
'inspect' instrumented successfully.
'ast' instrumented successfully.
'astor' instrumented successfully.
###Markdown
End of Excursion Our next step will now be not only to _log_ these events, but to actually construct _dependencies_ from them. Tracking Dependencies To construct dependencies from variable accesses, we subclass `DataTracker` into `DependencyTracker` – a class that actually keeps track of all these dependencies. Its constructor initializes a number of variables we will discuss below.
###Code
class DependencyTracker(DataTracker):
"""Track dependencies during execution"""
def __init__(self, *args: Any, **kwargs: Any) -> None:
"""Constructor. Arguments are passed to DataTracker.__init__()"""
super().__init__(*args, **kwargs)
self.origins: Dict[str, Location] = {} # Where current variables were last set
self.data_dependencies: Dependency = {} # As with Dependencies, above
self.control_dependencies: Dependency = {}
self.last_read: List[str] = [] # List of last read variables
self.last_checked_location = (StackInspector.unknown, 1)
self._ignore_location_change = False
self.data: List[List[str]] = [[]] # Data stack
self.control: List[List[str]] = [[]] # Control stack
self.frames: List[Dict[Union[int, str], Any]] = [{}] # Argument stack
self.args: Dict[Union[int, str], Any] = {} # Current args
###Output
_____no_output_____
###Markdown
Data Dependencies The first job of our `DependencyTracker` is to construct dependencies between _read_ and _written_ variables. Reading Variables As in `DataTracker`, the key method of `DependencyTracker` again is `get()`, invoked as `_data.get('x', x)` whenever a variable `x` is read. First and foremost, it appends the name of the read variable to the list `last_read`.
###Code
class DependencyTracker(DependencyTracker):
def get(self, name: str, value: Any) -> Any:
"""Track a read access for variable `name` with value `value`"""
self.check_location()
self.last_read.append(name)
return super().get(name, value)
def check_location(self) -> None:
pass # More on that below
x = 5
y = 3
_test_data = DependencyTracker(log=True)
_test_data.get('x', x) + _test_data.get('y', y)
_test_data.last_read
###Output
_____no_output_____
###Markdown
Checking Locations However, before appending the read variable to `last_read`, `_data.get()` does one more thing. By invoking `check_location()`, it clears the `last_read` list if we have reached a new line in the execution. This avoids situations such as

```python
x
y
z = a + b
```

where `x` and `y` are, well, read, but do not affect the last line. Therefore, with every new line, the list of last-read variables is cleared.
###Code
class DependencyTracker(DependencyTracker):
def clear_read(self) -> None:
"""Clear set of read variables"""
if self.log:
direct_caller = inspect.currentframe().f_back.f_code.co_name # type: ignore
caller_func, lineno = self.caller_location()
print(f"{caller_func.__name__}:{lineno}: "
f"clearing read variables {self.last_read} "
f"(from {direct_caller})")
self.last_read = []
def check_location(self) -> None:
"""If we are in a new location, clear set of read variables"""
location = self.caller_location()
func, lineno = location
last_func, last_lineno = self.last_checked_location
if self.last_checked_location != location:
if self._ignore_location_change:
self._ignore_location_change = False
elif func.__name__.startswith('<'):
# Entering list comprehension, eval(), exec(), ...
pass
elif last_func.__name__.startswith('<'):
# Exiting list comprehension, eval(), exec(), ...
pass
else:
# Standard case
self.clear_read()
self.last_checked_location = location
###Output
_____no_output_____
###Markdown
Two methods can suppress this reset of the `last_read` list:

* `ignore_next_location_change()` suppresses the reset for the next line. This is useful when returning from a function, when the return value is still in the list of "read" variables.
* `ignore_location_change()` suppresses the reset for the current line. This is useful if we already have returned from a function call.
###Code
class DependencyTracker(DependencyTracker):
def ignore_next_location_change(self) -> None:
self._ignore_location_change = True
def ignore_location_change(self) -> None:
self.last_checked_location = self.caller_location()
###Output
_____no_output_____
###Markdown
Watch how `DependencyTracker` resets `last_read` when a new line is executed:
###Code
_test_data = DependencyTracker()
_test_data.get('x', x) + _test_data.get('y', y)
_test_data.last_read
a = 42
b = -1
_test_data.get('a', a) + _test_data.get('b', b)
_test_data.last_read
###Output
_____no_output_____
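###Markdown
Conversely, `ignore_next_location_change()` suppresses exactly one such reset. A minimal sketch, assuming `x` and `y` from above; the helper `demo_ignore()` is hypothetical and only for illustration:
###Code
def demo_ignore() -> List[str]:
    _demo_data = DependencyTracker()
    _demo_data.get('x', x)
    _demo_data.ignore_next_location_change()
    _demo_data.get('y', y)  # new line, but the reset is suppressed once
    return _demo_data.last_read  # ['x', 'y']
demo_ignore()
###Output
_____no_output_____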
###Markdown
Setting Variables The method `set()` creates dependencies. It is invoked as `_data.set('x', value)` whenever a variable `x` is set. First and foremost, it takes the list of variables read (`last_read`), and for each variable $v$ in it, takes its origin $o$ (the place where it was last set) and appends the pair ($v$, $o$) to the list of data dependencies of the set variable. It then does a similar thing with control dependencies (more on these below), and finally marks (in `self.origins`) the current location as the origin of the variable being set.
###Code
import itertools
class DependencyTracker(DependencyTracker):
TEST = '<test>' # Name of pseudo-variables for testing conditions
def set(self, name: str, value: Any, loads: Optional[Set[str]] = None) -> Any:
"""Add a dependency for `name` = `value`"""
def add_dependencies(dependencies: Set[Node],
vars_read: List[str], tp: str) -> None:
"""Add origins of `vars_read` to `dependencies`."""
for var_read in vars_read:
if var_read in self.origins:
if var_read == self.TEST and tp == "data":
# Can't have data dependencies on conditions
continue
origin = self.origins[var_read]
dependencies.add((var_read, origin))
if self.log:
origin_func, origin_lineno = origin
caller_func, lineno = self.caller_location()
print(f"{caller_func.__name__}:{lineno}: "
f"new {tp} dependency: "
f"{name} <= {var_read} "
f"({origin_func.__name__}:{origin_lineno})")
self.check_location()
ret = super().set(name, value)
location = self.caller_location()
add_dependencies(self.data_dependencies.setdefault
((name, location), set()),
self.last_read, tp="data")
add_dependencies(self.control_dependencies.setdefault
((name, location), set()),
cast(List[str], itertools.chain.from_iterable(self.control)),
tp="control")
self.origins[name] = location
# Reset read info for next line
self.last_read = [name]
return ret
def dependencies(self) -> Dependencies:
"""Return dependencies"""
return Dependencies(self.data_dependencies,
self.control_dependencies)
###Output
_____no_output_____
###Markdown
Let us illustrate `set()` by example. Here's a set of variables read and written:
###Code
_test_data = DependencyTracker()
x = _test_data.set('x', 1)
y = _test_data.set('y', _test_data.get('x', x))
z = _test_data.set('z', _test_data.get('x', x) + _test_data.get('y', y))
###Output
_____no_output_____
###Markdown
The attribute `origins` saves for each variable where it was last written:
###Code
_test_data.origins
###Output
_____no_output_____
###Markdown
The attribute `data_dependencies` tracks for each variable the variables it was read from:
###Code
_test_data.data_dependencies
###Output
_____no_output_____
###Markdown
Hence, the above code already gives us a small dependency graph:
###Code
# ignore
_test_data.dependencies().graph()
###Output
_____no_output_____
###Markdown
In the remainder of this section, we define methods to

* track control dependencies (`test()`, `__enter__()`, `__exit__()`)
* track function calls and returns (`call()`, `ret()`)
* track function arguments (`arg()`, `param()`)
* check the validity of our dependencies (`validate()`).

Like our `get()` and `set()` methods above, these work by refining the appropriate methods defined in the `DataTracker` class, building on our `NodeTransformer` transformations. Excursion: Control Dependencies Let us detail control dependencies. As discussed with `DataTracker()`, we invoke `test()` methods for all control conditions, and place the controlled blocks into `with` clauses. The `test()` method simply sets a `<test>` variable; this also places it in `last_read`.
###Code
class DependencyTracker(DependencyTracker):
def test(self, value: Any) -> Any:
"""Track a test for condition `value`"""
self.set(self.TEST, value)
return super().test(value)
###Output
_____no_output_____
###Markdown
When entering a `with` block, the set of `last_read` variables holds the `<test>` variable read. We save it on the `control` stack, with the effect that any further variables written are now marked as controlled by `<test>`.
###Code
class DependencyTracker(DependencyTracker):
def __enter__(self) -> Any:
"""Track entering an if/while/for block"""
self.control.append(self.last_read)
self.clear_read()
return super().__enter__()
###Output
_____no_output_____
###Markdown
When we exit the `with` block, we restore earlier `last_read` values, preparing for `else` blocks.
###Code
class DependencyTracker(DependencyTracker):
def __exit__(self, exc_type: Type, exc_value: BaseException,
traceback: TracebackType) -> Optional[bool]:
"""Track exiting an if/while/for block"""
self.clear_read()
self.last_read = self.control.pop()
self.ignore_next_location_change()
return super().__exit__(exc_type, exc_value, traceback)
###Output
_____no_output_____
###Markdown
Here's an example of all these parts in action:
###Code
_test_data = DependencyTracker()
x = _test_data.set('x', 1)
y = _test_data.set('y', _test_data.get('x', x))
if _test_data.test(_test_data.get('x', x) >= _test_data.get('y', y)):
with _test_data:
z = _test_data.set('z',
_test_data.get('x', x) + _test_data.get('y', y))
_test_data.control_dependencies
###Output
_____no_output_____
###Markdown
The control dependency for `z` is reflected in the dependency graph:
###Code
# ignore
_test_data.dependencies()
###Output
_____no_output_____
###Markdown
End of Excursion Excursion: Calls and Returns To handle complex expressions involving functions, we introduce a _data stack_. Every time we invoke a function `func` (`call()` is invoked), we save the list of current variables read `last_read` on the `data` stack; when we return (`ret()` is invoked), we restore `last_read`. This also ensures that only those variables read while evaluating arguments will flow into the function call.
###Code
class DependencyTracker(DependencyTracker):
def call(self, func: Callable) -> Callable:
"""Track a call of function `func`"""
super().call(func)
if inspect.isgeneratorfunction(func):
return self.call_generator(func)
# Save context
if self.log:
caller_func, lineno = self.caller_location()
print(f"{caller_func.__name__}:{lineno}: "
f"saving read variables {self.last_read}")
self.data.append(self.last_read)
self.clear_read()
self.ignore_next_location_change()
self.frames.append(self.args)
self.args = {}
return func
def call_generator(self, func: Callable) -> Callable:
...
def in_generator(self) -> bool:
...
class DependencyTracker(DependencyTracker):
def ret(self, value: Any) -> Any:
"""Track a function return"""
super().ret(value)
if self.in_generator():
return self.ret_generator(value)
# Restore old context and add return value
ret_name = None
for var in self.last_read:
if var.startswith("<"): # "<return value>"
ret_name = var
self.last_read = self.data.pop()
if ret_name is not None:
self.last_read.append(ret_name)
self.ignore_location_change()
self.args = self.frames.pop()
if self.log:
caller_func, lineno = self.caller_location()
print(f"{caller_func.__name__}:{lineno}: "
f"restored read variables {self.last_read}")
return value
def ret_generator(self, generator: Any) -> Any:
...
###Output
_____no_output_____
###Markdown
Generator functions (those which `yield` a value) are not "called" in the sense that Python transfers control to them; instead, a "call" to a generator function creates a generator that is evaluated on demand. We mark generator function "calls" by saving `None` on the stacks. When the generator function returns the generator, we wrap the generator such that the arguments are being restored when it is invoked.
###Code
import copy
class DependencyTracker(DependencyTracker):
def in_generator(self) -> bool:
"""True if we are calling a generator function"""
return len(self.data) > 0 and self.data[-1] is None
def call_generator(self, func: Callable) -> Callable:
"""Track a call of a generator function"""
# Mark the fact that we're in a generator with `None` values
self.data.append(None) # type: ignore
self.frames.append(None) # type: ignore
assert self.in_generator()
self.clear_read()
return func
def ret_generator(self, generator: Any) -> Any:
"""Track the return of a generator function"""
# Pop the two 'None' values pushed earlier
self.data.pop()
self.frames.pop()
if self.log:
caller_func, lineno = self.caller_location()
print(f"{caller_func.__name__}:{lineno}: "
f"wrapping generator {generator} (args={self.args})")
# At this point, we already have collected the args.
# The returned generator depends on all of them.
for arg in self.args:
self.last_read += self.args[arg]
# Wrap the generator such that the args are restored
# when it is actually invoked, such that we can map them
# to parameters.
saved_args = copy.deepcopy(self.args)
def wrapper() -> Generator[Any, None, None]:
self.args = saved_args
if self.log:
caller_func, lineno = self.caller_location()
print(f"{caller_func.__name__}:{lineno}: "
f"calling generator (args={self.args})")
self.ignore_next_location_change()
yield from generator
return wrapper()
###Output
_____no_output_____
###Markdown
We see an example of how function calls and returns work in conjunction with function arguments, discussed in the next section. End of Excursion Excursion: Function Arguments Finally, we handle parameters and arguments. The `args` stack holds the current stack of function arguments, holding the `last_read` variable for each argument.
###Code
class DependencyTracker(DependencyTracker):
def arg(self, value: Any, pos: Optional[int] = None, kw: Optional[str] = None) -> Any:
"""
Track passing an argument `value`
(with given position `pos` 1..n or keyword `kw`)
"""
if self.log:
caller_func, lineno = self.caller_location()
print(f"{caller_func.__name__}:{lineno}: "
f"saving args read {self.last_read}")
if pos:
self.args[pos] = self.last_read
if kw:
self.args[kw] = self.last_read
self.clear_read()
return super().arg(value, pos, kw)
###Output
_____no_output_____
###Markdown
When accessing the arguments (with `param()`), we can retrieve this set of read variables for each argument.
###Code
class DependencyTracker(DependencyTracker):
def param(self, name: str, value: Any,
pos: Optional[int] = None, vararg: str = "", last: bool = False) -> Any:
"""
Track getting a parameter `name` with value `value`
(with given position `pos`).
vararg parameters are indicated by setting `vararg` to
'*' (*args) or '**' (**kwargs)
"""
self.clear_read()
if vararg == '*':
# We overapproximate by setting `args` to _all_ positional args
for index in self.args:
if isinstance(index, int) and pos is not None and index >= pos:
self.last_read += self.args[index]
elif vararg == '**':
# We overapproximate by setting `kwargs` to _all_ passed keyword args
for index in self.args:
if isinstance(index, str):
self.last_read += self.args[index]
elif name in self.args:
self.last_read = self.args[name]
elif pos in self.args:
self.last_read = self.args[pos]
if self.log:
caller_func, lineno = self.caller_location()
print(f"{caller_func.__name__}:{lineno}: "
f"restored params read {self.last_read}")
self.ignore_location_change()
ret = super().param(name, value, pos)
if last:
self.clear_read()
return ret
###Output
_____no_output_____
###Markdown
Let us illustrate all these on a small example.
###Code
def call_test() -> int:
c = 47
def sq(n: int) -> int:
return n * n
def gen(e: int) -> Generator[int, None, None]:
yield e * c
def just_x(x: Any, y: Any) -> Any:
return x
a = 42
b = gen(a)
d = list(b)[0]
xs = [1, 2, 3, 4]
ys = [sq(elem) for elem in xs if elem > 2]
return just_x(just_x(d, y=b), ys[0])
call_test()
###Output
_____no_output_____
###Markdown
We apply all our transformers on this code:
###Code
call_tree = ast.parse(inspect.getsource(call_test))
TrackCallTransformer().visit(call_tree)
TrackSetTransformer().visit(call_tree)
TrackGetTransformer().visit(call_tree)
TrackControlTransformer().visit(call_tree)
TrackReturnTransformer().visit(call_tree)
TrackParamsTransformer().visit(call_tree)
dump_tree(call_tree)
###Output
def call_test() -> int:
    c = _data.set('c', 47)

    def sq(n: int) -> int:
        _data.param('n', n, pos=1, last=True)
        return _data.set('<sq() return value>',
                         _data.get('n', n) * _data.get('n', n))

    def gen(e: int) -> _data.get('Generator', Generator)[int, None, None]:
        _data.param('e', e, pos=1, last=True)
        yield _data.set('<gen() yield value>',
                        _data.get('e', e) * _data.get('c', c))

    def just_x(x: _data.get('Any', Any), y: _data.get('Any', Any)) -> _data.get('Any', Any):
        _data.param('x', x, pos=1)
        _data.param('y', y, pos=2, last=True)
        return _data.set('<just_x() return value>', _data.get('x', x))

    a = _data.set('a', 42)
    b = _data.set('b', _data.ret(_data.call(_data.get('gen', gen))(
        _data.arg(_data.get('a', a), pos=1))))
    d = _data.set('d', _data.ret(_data.call(list)(
        _data.arg(_data.get('b', b), pos=1)))[0])
    xs = _data.set('xs', [1, 2, 3, 4])
    ys = _data.set('ys', [_data.ret(_data.call(_data.get('sq', sq))(
        _data.arg(_data.get('elem', elem), pos=1)))
        for elem in _data.set('elem', _data.get('xs', xs))
        if _data.get('elem', elem) > 2])
    return _data.set('<call_test() return value>', _data.ret(_data.call(
        _data.get('just_x', just_x))(_data.arg(_data.ret(_data.call(
            _data.get('just_x', just_x))(_data.arg(_data.get('d', d), pos=1),
            y=_data.arg(_data.get('b', b), kw='y'))), pos=1),
        _data.arg(_data.get('ys', ys)[0], pos=2))))
###Markdown
Again, we capture the dependencies:
###Code
class DependencyTrackerTester(DataTrackerTester):
def make_data_tracker(self) -> DependencyTracker:
return DependencyTracker(log=self.log)
with DependencyTrackerTester(call_tree, call_test, log=False) as call_deps:
call_test()
###Output
_____no_output_____
###Markdown
We see how

* `a` flows into the generator `b` and into the parameter `e` of `gen()`.
* `xs` flows into `elem`, which in turn flows into the parameter `n` of `sq()`. Both flow into `ys`.
* `just_x()` returns only its `x` argument.
###Code
call_deps.dependencies()
###Output
_____no_output_____
###Markdown
The `code()` view lists each function separately:
###Code
call_deps.dependencies().code()
###Output
* 10 def just_x(x: Any, y: Any) -> Any:  # <= <just_x() return value> (11), d (call_test:15), ys (call_test:18), b (call_test:14)
* 11     return x  # <= x (10)
   1 def call_test() -> int:
*  2     c = 47
   3
   4     def sq(n: int) -> int:
   5         return n * n
   6
   7     def gen(e: int) -> Generator[int, None, None]:
   8         yield e * c
   9
  10     def just_x(x: Any, y: Any) -> Any:
  11         return x
  12
* 13     a = 42
* 14     b = gen(a)  # <= a (13)
* 15     d = list(b)[0]  # <= <gen() yield value> (gen:8), b (14)
  16
* 17     xs = [1, 2, 3, 4]
* 18     ys = [sq(elem) for elem in xs if elem > 2]  # <= xs (17)
  19
* 20     return just_x(just_x(d, y=b), ys[0])  # <= <just_x() return value> (just_x:11)
*  7 def gen(e: int) -> Generator[int, None, None]:  # <= a (call_test:13)
*  8     yield e * c  # <= e (7), c (call_test:2)
*  4 def sq(n: int) -> int:  # <= elem (call_test:18)
*  5     return n * n  # <= n (4)
###Markdown
End of Excursion Excursion: Diagnostics To check the dependencies we obtain, we perform some minimal checks on whether a referenced variable actually also occurs in the source code.
###Code
import re
class Dependencies(Dependencies):
def validate(self) -> None:
"""Perform a simple syntactic validation of dependencies"""
super().validate()
for var in self.all_vars():
source = self.source(var)
if not source:
continue
if source.startswith('<'):
continue # no source
for dep_var in self.data[var] | self.control[var]:
dep_name, dep_location = dep_var
if dep_name == DependencyTracker.TEST:
continue # dependency on <test>
if dep_name.endswith(' value>'):
if source.find('(') < 0:
warnings.warn(f"Warning: {self.format_var(var)} "
f"depends on {self.format_var(dep_var)}, "
f"but {repr(source)} does not "
f"seem to have a call")
continue
if source.startswith('def'):
continue # function call
rx = re.compile(r'\b' + dep_name + r'\b')
if rx.search(source) is None:
warnings.warn(f"{self.format_var(var)} "
f"depends on {self.format_var(dep_var)}, "
f"but {repr(dep_name)} does not occur "
f"in {repr(source)}")
###Output
_____no_output_____
###Markdown
`validate()` is automatically called whenever dependencies are output, so if you see any of its error messages, something may be wrong. End of Excursion At this point, `DependencyTracker` is complete; we have everything in place to track even complex dependencies in instrumented code. Slicing Code Let us now put all these pieces together. We have a means to instrument the source code (our various `NodeTransformer` classes) and a means to track dependencies (the `DependencyTracker` class). Now comes the time to put all these things together in a single tool, which we call `Slicer`. The basic idea of `Slicer` is that you can use it as follows:

```python
with Slicer(func_1, func_2, ...) as slicer:
    func(...)
```

which first _instruments_ the functions given in the constructor (i.e., replaces their definitions with instrumented counterparts), then runs the code in the body, calling the instrumented functions and allowing the slicer to collect dependencies. When the body returns, the original definitions of the instrumented functions are restored. An Instrumenter Base Class The basic functionality of instrumenting a number of functions (and restoring them at the end of the `with` block) comes in an `Instrumenter` base class. It invokes `instrument()` on all items to instrument; this is to be overloaded in subclasses.
###Code
class Instrumenter(StackInspector):
"""Instrument functions for dynamic tracking"""
def __init__(self, *items_to_instrument: Callable,
globals: Optional[Dict[str, Any]] = None,
log: Union[bool, int] = False) -> None:
"""
Create an instrumenter.
`items_to_instrument` is a list of items to instrument.
`globals` is a namespace to use (default: caller's globals())
"""
self.log = log
self.items_to_instrument = items_to_instrument
if globals is None:
globals = self.caller_globals()
self.globals = globals
def __enter__(self) -> Any:
"""Instrument sources"""
for item in self.items_to_instrument:
self.instrument(item)
return self
def instrument(self, item: Any) -> None:
"""Instrument `item`. To be overloaded in subclasses."""
if self.log:
print("Instrumenting", item)
###Output
_____no_output_____
###Markdown
At the end of the `with` block, we restore the given functions.
###Code
class Instrumenter(Instrumenter):
def __exit__(self, exc_type: Type, exc_value: BaseException,
traceback: TracebackType) -> Optional[bool]:
"""Restore sources"""
self.restore()
return None
def restore(self) -> None:
for item in self.items_to_instrument:
self.globals[item.__name__] = item
###Output
_____no_output_____
###Markdown
By default, an `Instrumenter` simply outputs a log message:
###Code
with Instrumenter(middle, log=True) as ins:
pass
###Output
Instrumenting <function middle at 0x7ffd525a2d08>
###Markdown
The Slicer Class The `Slicer` class comes as a subclass of `Instrumenter`. It sets its own dependency tracker (which can be overwritten by setting the `dependency_tracker` keyword argument).
###Code
class Slicer(Instrumenter):
"""Track dependencies in an execution"""
def __init__(self, *items_to_instrument: Any,
dependency_tracker: Optional[DependencyTracker] = None,
globals: Optional[Dict[str, Any]] = None, log: Union[bool, int] = False):
"""Create a slicer.
`items_to_instrument` are Python functions or modules with source code.
`dependency_tracker` is the tracker to be used (default: DependencyTracker).
`globals` is the namespace to be used (default: caller's `globals()`)
`log`=True or `log` > 0 turns on logging
"""
super().__init__(*items_to_instrument, globals=globals, log=log)
if len(items_to_instrument) == 0:
raise ValueError("Need one or more items to instrument")
if dependency_tracker is None:
dependency_tracker = DependencyTracker(log=(log > 1))
self.dependency_tracker = dependency_tracker
self.saved_dependencies = None
###Output
_____no_output_____
###Markdown
The `parse()` method parses a given item, returning its AST.
###Code
class Slicer(Slicer):
def parse(self, item: Any) -> AST:
"""Parse `item`, returning its AST"""
source_lines, lineno = inspect.getsourcelines(item)
source = "".join(source_lines)
if self.log >= 2:
print_content(source, '.py', start_line_number=lineno)
print()
print()
tree = ast.parse(source)
ast.increment_lineno(tree, lineno - 1)
return tree
###Output
_____no_output_____
###Markdown
The `transform()` method applies the list of transformers defined earlier in this chapter.
###Code
class Slicer(Slicer):
def transformers(self) -> List[NodeTransformer]:
"""List of transformers to apply. To be extended in subclasses."""
return [
TrackCallTransformer(),
TrackSetTransformer(),
TrackGetTransformer(),
TrackControlTransformer(),
TrackReturnTransformer(),
TrackParamsTransformer()
]
def transform(self, tree: AST) -> AST:
"""Apply transformers on `tree`. May be extended in subclasses."""
# Apply transformers
for transformer in self.transformers():
if self.log >= 3:
print(transformer.__class__.__name__ + ':')
transformer.visit(tree)
ast.fix_missing_locations(tree)
if self.log >= 3:
print_content(
astor.to_source(tree,
add_line_information=self.log >= 4),
'.py')
print()
print()
if 0 < self.log < 3:
print_content(astor.to_source(tree), '.py')
print()
print()
return tree
###Output
_____no_output_____
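###Markdown
Since `transformers()` is explicitly meant "to be extended in subclasses", adding your own transformer is a one-liner. A hypothetical sketch (`VerboseSlicer` is our own name, not part of the chapter; we use a plain identity `NodeTransformer` as a stand-in for a real custom transformer, and such a subclass is best defined once `Slicer` is complete):
###Code
class VerboseSlicer(Slicer):
    def transformers(self) -> List[NodeTransformer]:
        # Run an additional (here: identity) transformer after the standard ones
        return super().transformers() + [NodeTransformer()]
###Output
_____no_output_____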
###Markdown
The `execute()` method executes the transformed tree (such that we get the new definitions). We also make the dependency tracker available for the code in the `with` block.
###Code
class Slicer(Slicer):
def execute(self, tree: AST, item: Any) -> None:
"""Compile and execute `tree`. May be extended in subclasses."""
# We pass the source file of `item` such that we can retrieve it
# when accessing the location of the new compiled code
source = cast(str, inspect.getsourcefile(item))
code = compile(tree, source, 'exec')
# Execute the code, resulting in a redefinition of item
exec(code, self.globals)
self.globals[DATA_TRACKER] = self.dependency_tracker
###Output
_____no_output_____
###Markdown
The `instrument()` method puts all these together, first parsing the item into a tree, then transforming and executing the tree.
###Code
class Slicer(Slicer):
def instrument(self, item: Any) -> None:
"""Instrument `item`, transforming its source code, and re-defining it."""
super().instrument(item)
tree = self.parse(item)
tree = self.transform(tree)
self.execute(tree, item)
###Output
_____no_output_____
###Markdown
When we restore the original definition (after the `with` block), we save the dependency tracker again.
###Code
class Slicer(Slicer):
def restore(self) -> None:
"""Restore original code."""
if DATA_TRACKER in self.globals:
self.saved_dependencies = self.globals[DATA_TRACKER]
del self.globals[DATA_TRACKER]
super().restore()
###Output
_____no_output_____
###Markdown
Three convenience functions allow us to see the dependencies as (well) dependencies, as code, and as a graph. These simply invoke the respective functions on the saved dependencies.
###Code
class Slicer(Slicer):
def dependencies(self) -> Dependencies:
"""Return collected dependencies."""
if self.saved_dependencies is None:
return Dependencies({}, {})
return self.saved_dependencies.dependencies()
def code(self, *args: Any, **kwargs: Any) -> None:
"""Show code of instrumented items, annotated with dependencies."""
first = True
for item in self.items_to_instrument:
if not first:
print()
self.dependencies().code(item, *args, **kwargs) # type: ignore
first = False
def graph(self, *args: Any, **kwargs: Any) -> Digraph:
"""Show dependency graph."""
return self.dependencies().graph(*args, **kwargs) # type: ignore
def _repr_svg_(self) -> Any:
"""If the object is output in Jupyter, render dependencies as a SVG graph"""
return self.graph()._repr_svg_()
###Output
_____no_output_____
###Markdown
Let us put `Slicer` into action. We track our `middle()` function:
###Code
with Slicer(middle) as slicer:
m = middle(2, 1, 3)
m
###Output
_____no_output_____
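###Markdown
Note that after the `with` block, the original (uninstrumented) definition of `middle()` is back in place, so calling it again tracks nothing:
###Code
middle(2, 1, 3)  # runs the restored, uninstrumented middle()
###Output
_____no_output_____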
###Markdown
These are the dependencies in string form (used when printed):
###Code
print(slicer.dependencies())
###Output
middle():
<test> (2) <= y (1), z (1)
<test> (3) <= y (1), x (1); <- <test> (2)
<test> (5) <= z (1), x (1); <- <test> (3)
<middle() return value> (6) <= y (1); <- <test> (5)
###Markdown
This is the code form:
###Code
slicer.code()
###Output
* 1 [34mdef[39;49;00m [32mmiddle[39;49;00m(x, y, z): [37m# type: ignore[39;49;00m
* 2 [34mif[39;49;00m y < z: [37m# <= y (1), z (1)[39;49;00m
* 3 [34mif[39;49;00m x < y: [37m# <= y (1), x (1); <- <test> (2)[39;49;00m
4 [34mreturn[39;49;00m y
* 5 [34melif[39;49;00m x < z: [37m# <= z (1), x (1); <- <test> (3)[39;49;00m
* 6 [34mreturn[39;49;00m y [37m# <= y (1); <- <test> (5)[39;49;00m
7 [34melse[39;49;00m:
8 [34mif[39;49;00m x > y:
9 [34mreturn[39;49;00m y
10 [34melif[39;49;00m x > z:
11 [34mreturn[39;49;00m x
12 [34mreturn[39;49;00m z
###Markdown
And this is the graph form:
###Code
slicer
###Output
_____no_output_____
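###Markdown
Recall that `Slicer` also accepts a `dependency_tracker` keyword argument. A quick sketch supplying our own tracker with logging turned on (this would print each tracked `get()`, `set()`, and other events as `middle()` runs):
###Code
with Slicer(middle, dependency_tracker=DependencyTracker(log=True)) as logging_slicer:
    middle(2, 1, 3)
###Output
_____no_output_____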
###Markdown
You can also access the raw `repr()` form, which allows you to reconstruct dependencies at any time. (This is how we showed off dependencies at the beginning of this chapter, before even introducing the code that computes them.)
###Code
print(repr(slicer.dependencies()))
###Output
Dependencies(
data={
('x', (middle, 1)): set(),
('y', (middle, 1)): set(),
('z', (middle, 1)): set(),
('<test>', (middle, 2)): {('y', (middle, 1)), ('z', (middle, 1))},
('<test>', (middle, 3)): {('y', (middle, 1)), ('x', (middle, 1))},
('<test>', (middle, 5)): {('z', (middle, 1)), ('x', (middle, 1))},
('<middle() return value>', (middle, 6)): {('y', (middle, 1))}},
control={
('x', (middle, 1)): set(),
('y', (middle, 1)): set(),
('z', (middle, 1)): set(),
('<test>', (middle, 2)): set(),
('<test>', (middle, 3)): {('<test>', (middle, 2))},
('<test>', (middle, 5)): {('<test>', (middle, 3))},
('<middle() return value>', (middle, 6)): {('<test>', (middle, 5))}})
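###Markdown
As a minimal sketch of such a reconstruction (assuming `Dependencies` and `middle` are still defined in the current namespace, since the repr references them by name):
###Code
saved = repr(slicer.dependencies())
restored_deps = eval(saved)  # re-creates an equivalent Dependencies object
# restored_deps.graph() would render the same dependency graph as above
###Output
_____no_output_____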
###Markdown
Diagnostics The `Slicer` constructor accepts a `log` argument (default: False), which can be set to show various intermediate results:
* `log=True` (or `log=1`): Show instrumented source code
* `log=2`: Also log execution
* `log=3`: Also log individual transformer steps
* `log=4`: Also log source line numbers
More Examples Let us demonstrate our `Slicer` class on a few more examples. Square Root The `square_root()` function from [the chapter on assertions](Assertions.ipynb) demonstrates a nice interplay between data and control dependencies.
###Code
import math
from Assertions import square_root # minor dependency
###Output
_____no_output_____
###Markdown
Here is the original source code:
###Code
print_content(inspect.getsource(square_root), '.py')
###Output
[34mdef[39;49;00m [32msquare_root[39;49;00m(x): [37m# type: ignore[39;49;00m
[34massert[39;49;00m x >= [34m0[39;49;00m [37m# precondition[39;49;00m
approx = [34mNone[39;49;00m
guess = x / [34m2[39;49;00m
[34mwhile[39;49;00m approx != guess:
approx = guess
guess = (approx + x / approx) / [34m2[39;49;00m
[34massert[39;49;00m math.isclose(approx * approx, x)
[34mreturn[39;49;00m approx
###Markdown
Turning on logging shows the instrumented version:
###Code
with Slicer(square_root, log=True) as root_slicer:
y = square_root(2.0)
###Output
Instrumenting <function square_root at 0x7ffd52e9af28>
[34mdef[39;49;00m [32msquare_root[39;49;00m(x):
_data.param([33m'[39;49;00m[33mx[39;49;00m[33m'[39;49;00m, x, pos=[34m1[39;49;00m, last=[34mTrue[39;49;00m)
[34massert[39;49;00m _data.get([33m'[39;49;00m[33mx[39;49;00m[33m'[39;49;00m, x) >= [34m0[39;49;00m
approx = _data.set([33m'[39;49;00m[33mapprox[39;49;00m[33m'[39;49;00m, [34mNone[39;49;00m)
guess = _data.set([33m'[39;49;00m[33mguess[39;49;00m[33m'[39;49;00m, _data.get([33m'[39;49;00m[33mx[39;49;00m[33m'[39;49;00m, x) / [34m2[39;49;00m)
[34mwhile[39;49;00m _data.test(_data.get([33m'[39;49;00m[33mapprox[39;49;00m[33m'[39;49;00m, approx) != _data.get([33m'[39;49;00m[33mguess[39;49;00m[33m'[39;49;00m, guess)):
[34mwith[39;49;00m _data:
approx = _data.set([33m'[39;49;00m[33mapprox[39;49;00m[33m'[39;49;00m, _data.get([33m'[39;49;00m[33mguess[39;49;00m[33m'[39;49;00m, guess))
guess = _data.set([33m'[39;49;00m[33mguess[39;49;00m[33m'[39;49;00m, (_data.get([33m'[39;49;00m[33mapprox[39;49;00m[33m'[39;49;00m, approx) + _data
.get([33m'[39;49;00m[33mx[39;49;00m[33m'[39;49;00m, x) / _data.get([33m'[39;49;00m[33mapprox[39;49;00m[33m'[39;49;00m, approx)) / [34m2[39;49;00m)
[34massert[39;49;00m _data.ret(_data.call(_data.get([33m'[39;49;00m[33mmath[39;49;00m[33m'[39;49;00m, math).isclose)(_data.arg(
_data.get([33m'[39;49;00m[33mapprox[39;49;00m[33m'[39;49;00m, approx) * _data.get([33m'[39;49;00m[33mapprox[39;49;00m[33m'[39;49;00m, approx), pos=[34m1[39;49;00m),
_data.arg(_data.get([33m'[39;49;00m[33mx[39;49;00m[33m'[39;49;00m, x), pos=[34m2[39;49;00m)))
[34mreturn[39;49;00m _data.set([33m'[39;49;00m[33m<square_root() return value>[39;49;00m[33m'[39;49;00m, _data.get([33m'[39;49;00m[33mapprox[39;49;00m[33m'[39;49;00m,
approx))
###Markdown
The dependency graph shows how `guess` and `approx` flow into each other until they are the same.
###Code
root_slicer
###Output
_____no_output_____
###Markdown
Again, we can show the code annotated with dependencies:
###Code
root_slicer.code()
###Output
* 54 [34mdef[39;49;00m [32msquare_root[39;49;00m(x): [37m# type: ignore[39;49;00m
55 [34massert[39;49;00m x >= [34m0[39;49;00m [37m# precondition[39;49;00m
56
* 57 approx = [34mNone[39;49;00m
* 58 guess = x / [34m2[39;49;00m [37m# <= x (54)[39;49;00m
* 59 [34mwhile[39;49;00m approx != guess: [37m# <= guess (61), approx (60), approx (57), guess (58)[39;49;00m
* 60 approx = guess [37m# <= guess (61), guess (58); <- <test> (59)[39;49;00m
* 61 guess = (approx + x / approx) / [34m2[39;49;00m [37m# <= x (54), approx (60); <- <test> (59)[39;49;00m
62
63 [34massert[39;49;00m math.isclose(approx * approx, x)
* 64 [34mreturn[39;49;00m approx [37m# <= approx (60)[39;49;00m
###Markdown
The astute reader may note that `assert p` statements do not control the following code, although an assertion would be equivalent to `if not p: raise Exception`. Why is that?
###Code
quiz("Why don't `assert` statements induce control dependencies?",
[
"We have no special handling of `assert` statements",
"We have no special handling of `raise` statements",
"Assertions are not supposed to act as controlling mechanisms",
"All of the above",
], '(1 * 1 << 1 * 1 << 1 * 1)')
###Output
_____no_output_____
###Markdown
Indeed: we treat assertions as "neutral" in the sense that they do not affect the remainder of the program – if they are turned off, they have no effect; and if they are turned on, the remaining program logic should not depend on them. (Our instrumentation also has no special treatment of `assert`, `raise`, or even `return` statements; the latter two should be handled by our `with` blocks.)
###Code
# print(repr(root_slicer))
###Output
_____no_output_____
###Markdown
Removing HTML Markup Let us come to our ongoing example, `remove_html_markup()`. This is what its instrumented code looks like:
###Code
with Slicer(remove_html_markup) as rhm_slicer:
s = remove_html_markup("<foo>bar</foo>")
###Output
_____no_output_____
###Markdown
The graph is as discussed in the introduction to this chapter:
###Code
rhm_slicer
# print(repr(rhm_slicer.dependencies()))
rhm_slicer.code()
###Output
* 238 [34mdef[39;49;00m [32mremove_html_markup[39;49;00m(s): [37m# type: ignore[39;49;00m
* 239 tag = [34mFalse[39;49;00m
* 240 quote = [34mFalse[39;49;00m
* 241 out = [33m"[39;49;00m[33m"[39;49;00m
242
* 243 [34mfor[39;49;00m c [35min[39;49;00m s: [37m# <= s (238)[39;49;00m
244 [34massert[39;49;00m tag [35mor[39;49;00m [35mnot[39;49;00m quote
245
* 246 [34mif[39;49;00m c == [33m'[39;49;00m[33m<[39;49;00m[33m'[39;49;00m [35mand[39;49;00m [35mnot[39;49;00m quote: [37m# <= c (243), quote (240)[39;49;00m
* 247 tag = [34mTrue[39;49;00m [37m# <- <test> (246)[39;49;00m
* 248 [34melif[39;49;00m c == [33m'[39;49;00m[33m>[39;49;00m[33m'[39;49;00m [35mand[39;49;00m [35mnot[39;49;00m quote: [37m# <= c (243), quote (240); <- <test> (246)[39;49;00m
* 249 tag = [34mFalse[39;49;00m [37m# <- <test> (248)[39;49;00m
* 250 [34melif[39;49;00m (c == [33m'[39;49;00m[33m"[39;49;00m[33m'[39;49;00m [35mor[39;49;00m c == [33m"[39;49;00m[33m'[39;49;00m[33m"[39;49;00m) [35mand[39;49;00m tag: [37m# <= c (243); <- <test> (248)[39;49;00m
251 quote = [35mnot[39;49;00m quote
* 252 [34melif[39;49;00m [35mnot[39;49;00m tag: [37m# <= tag (249), tag (247); <- <test> (250)[39;49;00m
* 253 out = out + c [37m# <= out (241), c (243), out (253); <- <test> (252)[39;49;00m
254
* 255 [34mreturn[39;49;00m out [37m# <= out (253)[39;49;00m
###Markdown
We can also compute slices over the dependencies:
###Code
_, start_remove_html_markup = inspect.getsourcelines(remove_html_markup)
start_remove_html_markup
slicing_criterion = ('tag', (remove_html_markup,
start_remove_html_markup + 9))
tag_deps = rhm_slicer.dependencies().backward_slice(slicing_criterion) # type: ignore
tag_deps
# repr(tag_deps)
###Output
_____no_output_____
###Markdown
Calls and Augmented Assign Our last example covers augmented assigns and data flow across function calls. We introduce two simple functions `add_to()` and `mul_with()`:
###Code
def add_to(n, m): # type: ignore
n += m
return n
def mul_with(x, y): # type: ignore
x *= y
return x
###Output
_____no_output_____
###Markdown
And we put these two together in a single call:
###Code
def test_math() -> None:
return mul_with(1, add_to(2, 3))
with Slicer(add_to, mul_with, test_math) as math_slicer:
test_math()
###Output
_____no_output_____
###Markdown
The resulting dependence graph nicely captures the data flow between these calls, notably arguments and parameters:
###Code
math_slicer
###Output
_____no_output_____
###Markdown
These are also reflected in the code view:
###Code
math_slicer.code()
###Output
* 1 [34mdef[39;49;00m [32madd_to[39;49;00m(n, m): [37m# type: ignore[39;49;00m
* 2 n += m [37m# <= n (1), m (1)[39;49;00m
* 3 [34mreturn[39;49;00m n [37m# <= n (2)[39;49;00m
* 1 [34mdef[39;49;00m [32mmul_with[39;49;00m(x, y): [37m# type: ignore # <= <add_to() return value> (add_to:3)[39;49;00m
* 2 x *= y [37m# <= y (1), x (1)[39;49;00m
* 3 [34mreturn[39;49;00m x [37m# <= x (2)[39;49;00m
1 [34mdef[39;49;00m [32mtest_math[39;49;00m() -> [34mNone[39;49;00m:
* 2 [34mreturn[39;49;00m mul_with([34m1[39;49;00m, add_to([34m2[39;49;00m, [34m3[39;49;00m)) [37m# <= <mul_with() return value> (mul_with:3)[39;49;00m
###Markdown
More Applications \todo{Present some more applications}:
* Learning across multiple (passing) runs
* Detecting deviations
* Statistical debugging with dependencies
Things that do not Work Our slicer (and especially the underlying dependency tracker) is still a proof of concept. A number of Python features are not or only partially supported, and/or hardly tested:
* __Exceptions__ are not handled. The code assumes that for every `call()`, there is a matching `ret()`; when exceptions break this, dependencies across function calls and arguments may be assigned incorrectly.
* __Multiple definitions on a single line__ as in `x = y; x = 1` are not handled correctly. Our implementation assumes that there is one statement per line.
* __If-Expressions__ (`y = 1 if x else 0`) do not create control dependencies, as there are no statements to control. Neither do `if` clauses in comprehensions.
* __Asynchronous functions__ (`async`, `await`) are not tested.
In these cases, the instrumentation and the underlying dependency tracker may fail to identify control and/or data flows. The semantics of the code, however, should always stay unchanged. Synopsis This chapter provides a `Slicer` class to automatically determine and visualize dynamic dependencies. When we say that a variable $x$ depends on a variable $y$ (written $x \leftarrow y$), we distinguish two kinds of dependencies:
* **data dependencies**: $x$ obtains its value from a computation involving the value of $y$.
* **control dependencies**: $x$ obtains its value because of a computation involving the value of $y$.
Such dependencies are crucial for debugging, as they allow us to determine the origins of individual values (and notably incorrect values). To determine dynamic dependencies in a function `func` and its callees `func1`, `func2`, etc., use
```python
with Slicer(func, func1, func2) as slicer:
    ...
```
and then `slicer.graph()` or `slicer.code()` to examine dependencies. Here is an example. The `demo()` function computes some number from `x`:
###Code
def demo(x: int) -> int:
z = x
while x <= z <= 64:
z *= 2
return z
###Output
_____no_output_____
###Markdown
By using `with Slicer(demo)`, we first instrument `demo()` and then execute it:
###Code
with Slicer(demo) as slicer:
demo(10)
###Output
_____no_output_____
###Markdown
After execution is complete, you can output `slicer` to visualize the dependencies as a graph. Data dependencies are shown as black solid edges; control dependencies are shown as grey dashed edges. We see how the parameter `x` flows into `z`, which is returned after some computation that is control dependent on a `<test>` involving `z`.
###Code
slicer
###Output
_____no_output_____
###Markdown
An alternate representation is `slicer.code()`, annotating the instrumented source code with (backward) dependencies. Data dependencies are shown with `<=`, control dependencies with `<-`; locations (lines) are shown in parentheses.
###Code
slicer.code()
###Output
* 1 [34mdef[39;49;00m [32mdemo[39;49;00m(x: [36mint[39;49;00m) -> [36mint[39;49;00m:
* 2 z = x [37m# <= x (1)[39;49;00m
* 3 [34mwhile[39;49;00m x <= z <= [34m64[39;49;00m: [37m# <= z (4), z (2), x (1)[39;49;00m
* 4 z *= [34m2[39;49;00m [37m# <= z (4), z (2); <- <test> (3)[39;49;00m
* 5 [34mreturn[39;49;00m z [37m# <= z (4)[39;49;00m
###Markdown
Dependencies can also be retrieved programmatically. The `dependencies()` method returns a `Dependencies` object encapsulating the dependency graph. The method `all_vars()` returns all variables in the dependency graph. Each variable is encoded as a pair (_name_, _location_) where _location_ is a pair (_codename_, _lineno_).
###Code
slicer.dependencies().all_vars()
###Output
_____no_output_____
###Markdown
`code()` and `graph()` methods can also be applied on dependencies. The method `backward_slice(var)` returns a backward slice for the given variable. To retrieve where `z` in Line 2 came from, use:
###Code
_, start_demo = inspect.getsourcelines(demo)
start_demo
slicer.dependencies().backward_slice(('z', (demo, start_demo + 1))).graph() # type: ignore
###Output
_____no_output_____
###Markdown
Here are the classes defined in this chapter. A `Slicer` instruments a program, using a `DependencyTracker` at run time to collect `Dependencies`.
###Code
# ignore
from ClassDiagram import display_class_hierarchy, class_tree
# ignore
assert class_tree(Slicer)[0][0] == Slicer
# ignore
display_class_hierarchy([Slicer, DependencyTracker, StackInspector, Dependencies],
abstract_classes=[
StackInspector,
Instrumenter
],
public_methods=[
StackInspector.caller_frame,
StackInspector.caller_function,
StackInspector.caller_globals,
StackInspector.caller_locals,
StackInspector.caller_location,
StackInspector.search_frame,
StackInspector.search_func,
Instrumenter.__init__,
Instrumenter.__enter__,
Instrumenter.__exit__,
Instrumenter.instrument,
Slicer.__init__,
Slicer.code,
Slicer.dependencies,
Slicer.graph,
Slicer._repr_svg_,
DataTracker.__init__,
DataTracker.__enter__,
DataTracker.__exit__,
DataTracker.arg,
DataTracker.augment,
DataTracker.call,
DataTracker.get,
DataTracker.param,
DataTracker.ret,
DataTracker.set,
DataTracker.test,
DataTracker.__repr__,
DependencyTracker.__init__,
DependencyTracker.__enter__,
DependencyTracker.__exit__,
DependencyTracker.arg,
# DependencyTracker.augment,
DependencyTracker.call,
DependencyTracker.get,
DependencyTracker.param,
DependencyTracker.ret,
DependencyTracker.set,
DependencyTracker.test,
DependencyTracker.__repr__,
Dependencies.__init__,
Dependencies.__repr__,
Dependencies.__str__,
Dependencies._repr_svg_,
Dependencies.code,
Dependencies.graph,
Dependencies.backward_slice,
Dependencies.all_functions,
Dependencies.all_vars,
],
project='debuggingbook')
###Output
_____no_output_____
###Markdown
Excursion: Transformer Stress Test We stress test our transformers by instrumenting, transforming, and compiling a number of modules.
###Code
import Assertions # minor dependency
import Debugger # minor dependency
for module in [Assertions, Debugger, inspect, ast, astor]:
module_tree = ast.parse(inspect.getsource(module))
TrackCallTransformer().visit(module_tree)
TrackSetTransformer().visit(module_tree)
TrackGetTransformer().visit(module_tree)
TrackControlTransformer().visit(module_tree)
TrackReturnTransformer().visit(module_tree)
TrackParamsTransformer().visit(module_tree)
# dump_tree(module_tree)
ast.fix_missing_locations(module_tree) # Must run this before compiling
module_code = compile(module_tree, '<stress_test>', 'exec')
print(f"{repr(module.__name__)} instrumented successfully.")
###Output
'Assertions' instrumented successfully.
'Debugger' instrumented successfully.
'inspect' instrumented successfully.
'ast' instrumented successfully.
'astor' instrumented successfully.
###Markdown
End of Excursion Our next step will now be not only to _log_ these events, but to actually construct _dependencies_ from them. Tracking Dependencies To construct dependencies from variable accesses, we subclass `DataTracker` into `DependencyTracker` – a class that actually keeps track of all these dependencies. Its constructor initializes a number of variables we will discuss below.
###Code
class DependencyTracker(DataTracker):
"""Track dependencies during execution"""
def __init__(self, *args: Any, **kwargs: Any) -> None:
"""Constructor. Arguments are passed to DataTracker.__init__()"""
super().__init__(*args, **kwargs)
self.origins: Dict[str, Location] = {} # Where current variables were last set
self.data_dependencies: Dependency = {} # As with Dependencies, above
self.control_dependencies: Dependency = {}
self.last_read: List[str] = [] # List of last read variables
self.last_checked_location = (StackInspector.unknown, 1)
self._ignore_location_change = False
self.data: List[List[str]] = [[]] # Data stack
self.control: List[List[str]] = [[]] # Control stack
self.frames: List[Dict[Union[int, str], Any]] = [{}] # Argument stack
self.args: Dict[Union[int, str], Any] = {} # Current args
###Output
_____no_output_____
###Markdown
Data Dependencies The first job of our `DependencyTracker` is to construct dependencies between _read_ and _written_ variables. Reading Variables As in `DataTracker`, the key method of `DependencyTracker` again is `get()`, invoked as `_data.get('x', x)` whenever a variable `x` is read. First and foremost, it appends the name of the read variable to the list `last_read`.
###Code
class DependencyTracker(DependencyTracker):
def get(self, name: str, value: Any) -> Any:
"""Track a read access for variable `name` with value `value`"""
self.check_location()
self.last_read.append(name)
return super().get(name, value)
def check_location(self) -> None:
pass # More on that below
x = 5
y = 3
_test_data = DependencyTracker(log=True)
_test_data.get('x', x) + _test_data.get('y', y)
_test_data.last_read
###Output
_____no_output_____
###Markdown
Checking Locations However, before appending the read variable to `last_read`, `_data.get()` does one more thing. By invoking `check_location()`, it clears the `last_read` list if we have reached a new line in the execution. This avoids situations such as
```python
x
y
z = a + b
```
where `x` and `y` are, well, read, but do not affect the last line. Therefore, with every new line, the list of last read variables is cleared.
###Code
class DependencyTracker(DependencyTracker):
def clear_read(self) -> None:
"""Clear set of read variables"""
if self.log:
direct_caller = inspect.currentframe().f_back.f_code.co_name # type: ignore
caller_func, lineno = self.caller_location()
print(f"{caller_func.__name__}:{lineno}: "
f"clearing read variables {self.last_read} "
f"(from {direct_caller})")
self.last_read = []
def check_location(self) -> None:
"""If we are in a new location, clear set of read variables"""
location = self.caller_location()
func, lineno = location
last_func, last_lineno = self.last_checked_location
if self.last_checked_location != location:
if self._ignore_location_change:
self._ignore_location_change = False
elif func.__name__.startswith('<'):
# Entering list comprehension, eval(), exec(), ...
pass
elif last_func.__name__.startswith('<'):
# Exiting list comprehension, eval(), exec(), ...
pass
else:
# Standard case
self.clear_read()
self.last_checked_location = location
###Output
_____no_output_____
###Markdown
Two methods can suppress this reset of the `last_read` list:
* `ignore_next_location_change()` suppresses the reset for the next line. This is useful when returning from a function, when the return value is still in the list of "read" variables.
* `ignore_location_change()` suppresses the reset for the current line. This is useful if we already have returned from a function call.
###Code
class DependencyTracker(DependencyTracker):
def ignore_next_location_change(self) -> None:
self._ignore_location_change = True
def ignore_location_change(self) -> None:
self.last_checked_location = self.caller_location()
###Output
_____no_output_____
###Markdown
Watch how `DependencyTracker` resets `last_read` when a new line is executed:
###Code
_test_data = DependencyTracker()
_test_data.get('x', x) + _test_data.get('y', y)
_test_data.last_read
a = 42
b = -1
_test_data.get('a', a) + _test_data.get('b', b)
_test_data.last_read
###Output
_____no_output_____
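###Markdown
Conversely, here is a quick sketch (reusing `x` and `y` from above) of how `ignore_next_location_change()` lets reads survive a line change; the exact behavior may vary with how locations are resolved in interactive sessions:
###Code
_test_data = DependencyTracker()
_test_data.get('x', x)
_test_data.ignore_next_location_change()  # suppress the clearing on the next line
_test_data.get('y', y)
_test_data.last_read  # contains both 'x' and 'y'
###Output
_____no_output_____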
###Markdown
Setting Variables The method `set()` creates dependencies. It is invoked as `_data.set('x', value)` whenever a variable `x` is set. First and foremost, it takes the list of read variables `last_read`, and for each variable $v$ in it, it takes its origin $o$ (the place where it was last set) and appends the pair ($v$, $o$) to the list of data dependencies. It then does a similar thing with control dependencies (more on these below), and finally records (in `self.origins`) the current location as the origin of the variable being set.
###Code
import itertools
class DependencyTracker(DependencyTracker):
TEST = '<test>' # Name of pseudo-variables for testing conditions
def set(self, name: str, value: Any, loads: Optional[Set[str]] = None) -> Any:
"""Add a dependency for `name` = `value`"""
def add_dependencies(dependencies: Set[Node],
vars_read: List[str], tp: str) -> None:
"""Add origins of `vars_read` to `dependencies`."""
for var_read in vars_read:
if var_read in self.origins:
if var_read == self.TEST and tp == "data":
# Can't have data dependencies on conditions
continue
origin = self.origins[var_read]
dependencies.add((var_read, origin))
if self.log:
origin_func, origin_lineno = origin
caller_func, lineno = self.caller_location()
print(f"{caller_func.__name__}:{lineno}: "
f"new {tp} dependency: "
f"{name} <= {var_read} "
f"({origin_func.__name__}:{origin_lineno})")
self.check_location()
ret = super().set(name, value)
location = self.caller_location()
add_dependencies(self.data_dependencies.setdefault
((name, location), set()),
self.last_read, tp="data")
add_dependencies(self.control_dependencies.setdefault
((name, location), set()),
cast(List[str], itertools.chain.from_iterable(self.control)),
tp="control")
self.origins[name] = location
# Reset read info for next line
self.last_read = [name]
return ret
def dependencies(self) -> Dependencies:
"""Return dependencies"""
return Dependencies(self.data_dependencies,
self.control_dependencies)
###Output
_____no_output_____
###Markdown
Let us illustrate `set()` by example. Here's a set of variables read and written:
###Code
_test_data = DependencyTracker()
x = _test_data.set('x', 1)
y = _test_data.set('y', _test_data.get('x', x))
z = _test_data.set('z', _test_data.get('x', x) + _test_data.get('y', y))
###Output
_____no_output_____
###Markdown
The attribute `origins` saves for each variable where it was last written:
###Code
_test_data.origins
###Output
_____no_output_____
###Markdown
The attribute `data_dependencies` tracks for each variable the variables it was read from:
###Code
_test_data.data_dependencies
###Output
_____no_output_____
###Markdown
Hence, the above code already gives us a small dependency graph:
###Code
# ignore
_test_data.dependencies().graph()
###Output
_____no_output_____
###Markdown
In the remainder of this section, we define methods to
* track control dependencies (`test()`, `__enter__()`, `__exit__()`)
* track function calls and returns (`call()`, `ret()`)
* track function arguments (`arg()`, `param()`)
* check the validity of our dependencies (`validate()`).
Like our `get()` and `set()` methods above, these work by refining the appropriate methods defined in the `DataTracker` class, building on our `NodeTransformer` transformations. Excursion: Control Dependencies Let us detail control dependencies. As discussed with `DataTracker()`, we invoke `test()` methods for all control conditions, and place the controlled blocks into `with` clauses. The `test()` method simply sets a `<test>` variable; this also places it in `last_read`.
###Code
class DependencyTracker(DependencyTracker):
def test(self, value: Any) -> Any:
"""Track a test for condition `value`"""
self.set(self.TEST, value)
return super().test(value)
###Output
_____no_output_____
###Markdown
When entering a `with` block, the set of `last_read` variables holds the `<test>` variable read. We save it in the `control` stack, with the effect of any further variables written now being marked as controlled by `<test>`.
###Code
class DependencyTracker(DependencyTracker):
def __enter__(self) -> Any:
"""Track entering an if/while/for block"""
self.control.append(self.last_read)
self.clear_read()
return super().__enter__()
###Output
_____no_output_____
###Markdown
When we exit the `with` block, we restore earlier `last_read` values, preparing for `else` blocks.
###Code
class DependencyTracker(DependencyTracker):
def __exit__(self, exc_type: Type, exc_value: BaseException,
traceback: TracebackType) -> Optional[bool]:
"""Track exiting an if/while/for block"""
self.clear_read()
self.last_read = self.control.pop()
self.ignore_next_location_change()
return super().__exit__(exc_type, exc_value, traceback)
###Output
_____no_output_____
###Markdown
Here's an example of all these parts in action:
###Code
_test_data = DependencyTracker()
x = _test_data.set('x', 1)
y = _test_data.set('y', _test_data.get('x', x))
if _test_data.test(_test_data.get('x', x) >= _test_data.get('y', y)):
with _test_data:
z = _test_data.set('z',
_test_data.get('x', x) + _test_data.get('y', y))
_test_data.control_dependencies
###Output
_____no_output_____
###Markdown
The control dependency for `z` is reflected in the dependency graph:
###Code
# ignore
_test_data.dependencies()
###Output
_____no_output_____
###Markdown
End of Excursion Excursion: Calls and Returns
###Code
import copy
###Output
_____no_output_____
###Markdown
To handle complex expressions involving functions, we introduce a _data stack_. Every time we invoke a function `func` (`call()` is invoked), we save the current list of read variables `last_read` on the `data` stack; when we return (`ret()` is invoked), we restore `last_read`. This also ensures that only those variables read while evaluating arguments will flow into the function call.
###Code
class DependencyTracker(DependencyTracker):
def call(self, func: Callable) -> Callable:
"""Track a call of function `func`"""
super().call(func)
if inspect.isgeneratorfunction(func):
return self.call_generator(func)
# Save context
if self.log:
caller_func, lineno = self.caller_location()
print(f"{caller_func.__name__}:{lineno}: "
f"saving read variables {self.last_read}")
self.data.append(self.last_read)
self.clear_read()
self.ignore_next_location_change()
self.frames.append(self.args)
self.args = {}
return func
class DependencyTracker(DependencyTracker):
def ret(self, value: Any) -> Any:
"""Track a function return"""
super().ret(value)
if self.in_generator():
return self.ret_generator(value)
# Restore old context and add return value
ret_name = None
for var in self.last_read:
if var.startswith("<"): # "<return value>"
ret_name = var
self.last_read = self.data.pop()
if ret_name is not None:
self.last_read.append(ret_name)
self.ignore_location_change()
self.args = self.frames.pop()
if self.log:
caller_func, lineno = self.caller_location()
print(f"{caller_func.__name__}:{lineno}: "
f"restored read variables {self.last_read}")
return value
###Output
_____no_output_____
###Markdown
Generator functions (those which `yield` a value) are not "called" in the sense that Python transfers control to them; instead, a "call" to a generator function creates a generator that is evaluated on demand. We mark generator function "calls" by saving `None` on the stacks. When the generator function returns the generator, we wrap the generator such that the arguments are being restored when it is invoked.
###Code
class DependencyTracker(DependencyTracker):
def in_generator(self) -> bool:
"""True if we are calling a generator function"""
return len(self.data) > 0 and self.data[-1] is None
def call_generator(self, func: Callable) -> Callable:
"""Track a call of a generator function"""
# Mark the fact that we're in a generator with `None` values
self.data.append(None) # type: ignore
self.frames.append(None) # type: ignore
assert self.in_generator()
self.clear_read()
return func
def ret_generator(self, generator: Any) -> Any:
"""Track the return of a generator function"""
# Pop the two 'None' values pushed earlier
self.data.pop()
self.frames.pop()
if self.log:
caller_func, lineno = self.caller_location()
print(f"{caller_func.__name__}:{lineno}: "
f"wrapping generator {generator} (args={self.args})")
# At this point, we already have collected the args.
# The returned generator depends on all of them.
for arg in self.args:
self.last_read += self.args[arg]
# Wrap the generator such that the args are restored
# when it is actually invoked, such that we can map them
# to parameters.
saved_args = copy.deepcopy(self.args)
def wrapper() -> Generator[Any, None, None]:
self.args = saved_args
if self.log:
caller_func, lineno = self.caller_location()
print(f"{caller_func.__name__}:{lineno}: "
f"calling generator (args={self.args})")
self.ignore_next_location_change()
yield from generator
return wrapper()
###Output
_____no_output_____
###Markdown
We see an example of how function calls and returns work in conjunction with function arguments, discussed in the next section. End of Excursion Excursion: Function Arguments Finally, we handle parameters and arguments. The `args` mapping holds, for each argument of the current call, the list of `last_read` variables read while evaluating it; the `frames` stack saves and restores these mappings across nested calls.
###Code
class DependencyTracker(DependencyTracker):
def arg(self, value: Any, pos: Optional[int] = None, kw: Optional[str] = None) -> Any:
"""
Track passing an argument `value`
(with given position `pos` 1..n or keyword `kw`)
"""
if self.log:
caller_func, lineno = self.caller_location()
print(f"{caller_func.__name__}:{lineno}: "
f"saving args read {self.last_read}")
if pos:
self.args[pos] = self.last_read
if kw:
self.args[kw] = self.last_read
self.clear_read()
return super().arg(value, pos, kw)
###Output
_____no_output_____
###Markdown
When accessing the arguments (with `param()`), we can retrieve this set of read variables for each argument.
###Code
class DependencyTracker(DependencyTracker):
def param(self, name: str, value: Any,
pos: Optional[int] = None, vararg: str = "", last: bool = False) -> Any:
"""
Track getting a parameter `name` with value `value`
(with given position `pos`).
vararg parameters are indicated by setting `varargs` to
'*' (*args) or '**' (**kwargs)
"""
self.clear_read()
if vararg == '*':
# We overapproximate by setting `args` to _all_ positional args
for index in self.args:
if isinstance(index, int) and pos is not None and index >= pos:
self.last_read += self.args[index]
elif vararg == '**':
# We overapproximate by setting `kwargs` to _all_ passed keyword args
for index in self.args:
if isinstance(index, str):
self.last_read += self.args[index]
elif name in self.args:
self.last_read = self.args[name]
elif pos in self.args:
self.last_read = self.args[pos]
if self.log:
caller_func, lineno = self.caller_location()
print(f"{caller_func.__name__}:{lineno}: "
f"restored params read {self.last_read}")
self.ignore_location_change()
ret = super().param(name, value, pos)
if last:
self.clear_read()
return ret
###Output
_____no_output_____
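###Markdown
The over-approximation for `*args` can be seen in a hand-simulated sketch: we invoke the tracker methods directly, mimicking what instrumented code for a call `f(a, b)` to a hypothetical `def f(*args)` would do (we omit the closing `ret()` for brevity):
###Code
_test_data = DependencyTracker()
a = _test_data.set('a', 1)
b = _test_data.set('b', 2)
f = _test_data.call(lambda *args: None)  # as if calling f(a, b)
_test_data.arg(_test_data.get('a', a), pos=1)
_test_data.arg(_test_data.get('b', b), pos=2)
_test_data.param('args', (a, b), pos=1, vararg='*', last=True)
_test_data.data_dependencies  # ('args', ...) now depends on both 'a' and 'b'
###Output
_____no_output_____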
###Markdown
Let us illustrate all these on a small example.
###Code
def call_test() -> int:
c = 47
def sq(n: int) -> int:
return n * n
def gen(e: int) -> Generator[int, None, None]:
yield e * c
def just_x(x: Any, y: Any) -> Any:
return x
a = 42
b = gen(a)
d = list(b)[0]
xs = [1, 2, 3, 4]
ys = [sq(elem) for elem in xs if elem > 2]
return just_x(just_x(d, y=b), ys[0])
call_test()
###Output
_____no_output_____
###Markdown
We apply all our transformers on this code:
###Code
call_tree = ast.parse(inspect.getsource(call_test))
TrackCallTransformer().visit(call_tree)
TrackSetTransformer().visit(call_tree)
TrackGetTransformer().visit(call_tree)
TrackControlTransformer().visit(call_tree)
TrackReturnTransformer().visit(call_tree)
TrackParamsTransformer().visit(call_tree)
dump_tree(call_tree)
###Output
[34mdef[39;49;00m [32mcall_test[39;49;00m() ->[36mint[39;49;00m:
c = _data.set([33m'[39;49;00m[33mc[39;49;00m[33m'[39;49;00m, [34m47[39;49;00m)
[34mdef[39;49;00m [32msq[39;49;00m(n: [36mint[39;49;00m) ->[36mint[39;49;00m:
_data.param([33m'[39;49;00m[33mn[39;49;00m[33m'[39;49;00m, n, pos=[34m1[39;49;00m, last=[34mTrue[39;49;00m)
[34mreturn[39;49;00m _data.set([33m'[39;49;00m[33m<sq() return value>[39;49;00m[33m'[39;49;00m, _data.get([33m'[39;49;00m[33mn[39;49;00m[33m'[39;49;00m, n) * _data.
get([33m'[39;49;00m[33mn[39;49;00m[33m'[39;49;00m, n))
[34mdef[39;49;00m [32mgen[39;49;00m(e: [36mint[39;49;00m) ->_data.get([33m'[39;49;00m[33mGenerator[39;49;00m[33m'[39;49;00m, Generator)[[36mint[39;49;00m, [34mNone[39;49;00m, [34mNone[39;49;00m]:
_data.param([33m'[39;49;00m[33me[39;49;00m[33m'[39;49;00m, e, pos=[34m1[39;49;00m, last=[34mTrue[39;49;00m)
[34myield[39;49;00m _data.set([33m'[39;49;00m[33m<gen() yield value>[39;49;00m[33m'[39;49;00m, _data.get([33m'[39;49;00m[33me[39;49;00m[33m'[39;49;00m, e) * _data.
get([33m'[39;49;00m[33mc[39;49;00m[33m'[39;49;00m, c))
[34mdef[39;49;00m [32mjust_x[39;49;00m(x: _data.get([33m'[39;49;00m[33mAny[39;49;00m[33m'[39;49;00m, Any), y: _data.get([33m'[39;49;00m[33mAny[39;49;00m[33m'[39;49;00m, Any)) ->_data.get(
[33m'[39;49;00m[33mAny[39;49;00m[33m'[39;49;00m, Any):
_data.param([33m'[39;49;00m[33mx[39;49;00m[33m'[39;49;00m, x, pos=[34m1[39;49;00m)
_data.param([33m'[39;49;00m[33my[39;49;00m[33m'[39;49;00m, y, pos=[34m2[39;49;00m, last=[34mTrue[39;49;00m)
[34mreturn[39;49;00m _data.set([33m'[39;49;00m[33m<just_x() return value>[39;49;00m[33m'[39;49;00m, _data.get([33m'[39;49;00m[33mx[39;49;00m[33m'[39;49;00m, x))
a = _data.set([33m'[39;49;00m[33ma[39;49;00m[33m'[39;49;00m, [34m42[39;49;00m)
b = _data.set([33m'[39;49;00m[33mb[39;49;00m[33m'[39;49;00m, _data.ret(_data.call(_data.get([33m'[39;49;00m[33mgen[39;49;00m[33m'[39;49;00m, gen))(_data.
arg(_data.get([33m'[39;49;00m[33ma[39;49;00m[33m'[39;49;00m, a), pos=[34m1[39;49;00m))))
d = _data.set([33m'[39;49;00m[33md[39;49;00m[33m'[39;49;00m, _data.ret(_data.call([36mlist[39;49;00m)(_data.arg(_data.get([33m'[39;49;00m[33mb[39;49;00m[33m'[39;49;00m,
b), pos=[34m1[39;49;00m)))[[34m0[39;49;00m])
xs = _data.set([33m'[39;49;00m[33mxs[39;49;00m[33m'[39;49;00m, [[34m1[39;49;00m, [34m2[39;49;00m, [34m3[39;49;00m, [34m4[39;49;00m])
ys = _data.set([33m'[39;49;00m[33mys[39;49;00m[33m'[39;49;00m, [_data.ret(_data.call(_data.get([33m'[39;49;00m[33msq[39;49;00m[33m'[39;49;00m, sq))(_data.
arg(_data.get([33m'[39;49;00m[33melem[39;49;00m[33m'[39;49;00m, elem), pos=[34m1[39;49;00m))) [34mfor[39;49;00m elem [35min[39;49;00m _data.set([33m'[39;49;00m[33melem[39;49;00m[33m'[39;49;00m,
_data.get([33m'[39;49;00m[33mxs[39;49;00m[33m'[39;49;00m, xs)) [34mif[39;49;00m _data.get([33m'[39;49;00m[33melem[39;49;00m[33m'[39;49;00m, elem) > [34m2[39;49;00m])
[34mreturn[39;49;00m _data.set([33m'[39;49;00m[33m<call_test() return value>[39;49;00m[33m'[39;49;00m, _data.ret(_data.call(
_data.get([33m'[39;49;00m[33mjust_x[39;49;00m[33m'[39;49;00m, just_x))(_data.arg(_data.ret(_data.call(_data.
get([33m'[39;49;00m[33mjust_x[39;49;00m[33m'[39;49;00m, just_x))(_data.arg(_data.get([33m'[39;49;00m[33md[39;49;00m[33m'[39;49;00m, d), pos=[34m1[39;49;00m), y=_data
.arg(_data.get([33m'[39;49;00m[33mb[39;49;00m[33m'[39;49;00m, b), kw=[33m'[39;49;00m[33my[39;49;00m[33m'[39;49;00m))), pos=[34m1[39;49;00m), _data.arg(_data.get([33m'[39;49;00m[33mys[39;49;00m[33m'[39;49;00m,
ys)[[34m0[39;49;00m], pos=[34m2[39;49;00m))))
###Markdown
Again, we capture the dependencies:
###Code
class DependencyTrackerTester(DataTrackerTester):
def make_data_tracker(self) -> DependencyTracker:
return DependencyTracker(log=self.log)
with DependencyTrackerTester(call_tree, call_test, log=False) as call_deps:
call_test()
###Output
_____no_output_____
###Markdown
We see how
* `a` flows into the generator `b` and into the parameter `e` of `gen()`.
* `xs` flows into `elem`, which in turn flows into the parameter `n` of `sq()`. Both flow into `ys`.
* `just_x()` returns only its `x` argument.
###Code
call_deps.dependencies()
###Output
_____no_output_____
###Markdown
The `code()` view lists each function separately:
###Code
call_deps.dependencies().code()
###Output
1 [34mdef[39;49;00m [32mcall_test[39;49;00m() -> [36mint[39;49;00m:
* 2 c = [34m47[39;49;00m
3
4 [34mdef[39;49;00m [32msq[39;49;00m(n: [36mint[39;49;00m) -> [36mint[39;49;00m:
5 [34mreturn[39;49;00m n * n
6
7 [34mdef[39;49;00m [32mgen[39;49;00m(e: [36mint[39;49;00m) -> Generator[[36mint[39;49;00m, [34mNone[39;49;00m, [34mNone[39;49;00m]:
8 [34myield[39;49;00m e * c
9
10 [34mdef[39;49;00m [32mjust_x[39;49;00m(x: Any, y: Any) -> Any:
11 [34mreturn[39;49;00m x
12
* 13 a = [34m42[39;49;00m
* 14 b = gen(a) [37m# <= a (13)[39;49;00m
* 15 d = [36mlist[39;49;00m(b)[[34m0[39;49;00m] [37m# <= <gen() yield value> (gen:8), b (14)[39;49;00m
16
* 17 xs = [[34m1[39;49;00m, [34m2[39;49;00m, [34m3[39;49;00m, [34m4[39;49;00m]
* 18 ys = [sq(elem) [34mfor[39;49;00m elem [35min[39;49;00m xs [34mif[39;49;00m elem > [34m2[39;49;00m] [37m# <= xs (17)[39;49;00m
19
* 20 [34mreturn[39;49;00m just_x(just_x(d, y=b), ys[[34m0[39;49;00m]) [37m# <= <just_x() return value> (just_x:11)[39;49;00m
* 4 [34mdef[39;49;00m [32msq[39;49;00m(n: [36mint[39;49;00m) -> [36mint[39;49;00m: [37m# <= elem (call_test:18)[39;49;00m
* 5 [34mreturn[39;49;00m n * n [37m# <= n (4)[39;49;00m
* 10 [34mdef[39;49;00m [32mjust_x[39;49;00m(x: Any, y: Any) -> Any: [37m# <= <just_x() return value> (11), d (call_test:15), ys (call_test:18), b (call_test:14)[39;49;00m
* 11 [34mreturn[39;49;00m x [37m# <= x (10)[39;49;00m
* 7 [34mdef[39;49;00m [32mgen[39;49;00m(e: [36mint[39;49;00m) -> Generator[[36mint[39;49;00m, [34mNone[39;49;00m, [34mNone[39;49;00m]: [37m# <= a (call_test:13)[39;49;00m
* 8 [34myield[39;49;00m e * c [37m# <= e (7), c (call_test:2)[39;49;00m
###Markdown
End of Excursion Excursion: Diagnostics To check the dependencies we obtain, we perform some minimal checks on whether a referenced variable actually also occurs in the source code.
###Code
import re
class Dependencies(Dependencies):
def validate(self) -> None:
"""Perform a simple syntactic validation of dependencies"""
super().validate()
for var in self.all_vars():
source = self.source(var)
if not source:
continue
if source.startswith('<'):
continue # no source
for dep_var in self.data[var] | self.control[var]:
dep_name, dep_location = dep_var
if dep_name == DependencyTracker.TEST:
continue # dependency on <test>
if dep_name.endswith(' value>'):
if source.find('(') < 0:
warnings.warn(f"Warning: {self.format_var(var)} "
f"depends on {self.format_var(dep_var)}, "
f"but {repr(source)} does not "
f"seem to have a call")
continue
if source.startswith('def'):
continue # function call
rx = re.compile(r'\b' + dep_name + r'\b')
if rx.search(source) is None:
warnings.warn(f"{self.format_var(var)} "
f"depends on {self.format_var(dep_var)}, "
f"but {repr(dep_name)} does not occur "
f"in {repr(source)}")
###Output
_____no_output_____
###Markdown
`validate()` is automatically called whenever dependencies are output, so if you see any of its error messages, something may be wrong. End of Excursion At this point, `DependencyTracker` is complete; we have all in place to track even complex dependencies in instrumented code. Slicing Code Let us now put all these pieces together. We have a means to instrument the source code (our various `NodeTransformer` classes) and a means to track dependencies (the `DependencyTracker` class). Now comes the time to put all these things together in a single tool, which we call `Slicer`.The basic idea of `Slicer` is that you can use it as follows:```pythonwith Slicer(func_1, func_2, ...) as slicer: func(...)```which first _instruments_ the functions given in the constructor (i.e., replaces their definitions with instrumented counterparts), and then runs the code in the body, calling instrumented functions, and allowing the slicer to collect dependencies. When the body returns, the original definition of the instrumented functions is restored. An Instrumenter Base Class The basic functionality of instrumenting a number of functions (and restoring them at the end of the `with` block) comes in a `Instrumenter` base class. It invokes `instrument()` on all items to instrument; this is to be overloaded in subclasses.
###Code
class Instrumenter(StackInspector):
"""Instrument functions for dynamic tracking"""
def __init__(self, *items_to_instrument: Callable,
globals: Optional[Dict[str, Any]] = None,
log: Union[bool, int] = False) -> None:
"""
Create an instrumenter.
`items_to_instrument` is a list of items to instrument.
`globals` is a namespace to use (default: caller's globals())
"""
self.log = log
self.items_to_instrument = items_to_instrument
if globals is None:
globals = self.caller_globals()
self.globals = globals
def __enter__(self) -> Any:
"""Instrument sources"""
for item in self.items_to_instrument:
self.instrument(item)
return self
def instrument(self, item: Any) -> None:
"""Instrument `item`. To be overloaded in subclasses."""
if self.log:
print("Instrumenting", item)
###Output
_____no_output_____
###Markdown
At the end of the `with` block, we restore the given functions.
###Code
class Instrumenter(Instrumenter):
def __exit__(self, exc_type: Type, exc_value: BaseException,
traceback: TracebackType) -> Optional[bool]:
"""Restore sources"""
self.restore()
return None
def restore(self) -> None:
for item in self.items_to_instrument:
self.globals[item.__name__] = item
###Output
_____no_output_____
###Markdown
By default, an `Instrumenter` simply outputs a log message:
###Code
with Instrumenter(middle, log=True) as ins:
pass
###Output
Instrumenting <function middle at 0x7f943ccb5e18>
###Markdown
The Slicer Class The `Slicer` class comes as a subclass of `Instrumenter`. It sets its own dependency tracker (which can be overwritten by setting the `dependency_tracker` keyword argument).
###Code
class Slicer(Instrumenter):
"""Track dependencies in an execution"""
def __init__(self, *items_to_instrument: Any,
dependency_tracker: Optional[DependencyTracker] = None,
globals: Optional[Dict[str, Any]] = None, log: Union[bool, int] = False):
"""Create a slicer.
`items_to_instrument` are Python functions or modules with source code.
`dependency_tracker` is the tracker to be used (default: DependencyTracker).
`globals` is the namespace to be used(default: caller's `globals()`)
`log`=True or `log` > 0 turns on logging
"""
super().__init__(*items_to_instrument, globals=globals, log=log)
if len(items_to_instrument) == 0:
raise ValueError("Need one or more items to instrument")
if dependency_tracker is None:
dependency_tracker = DependencyTracker(log=(log > 1))
self.dependency_tracker = dependency_tracker
self.saved_dependencies = None
###Output
_____no_output_____
###Markdown
The `parse()` method parses a given item, returning its AST.
###Code
class Slicer(Slicer):
def parse(self, item: Any) -> AST:
"""Parse `item`, returning its AST"""
source_lines, lineno = inspect.getsourcelines(item)
source = "".join(source_lines)
if self.log >= 2:
print_content(source, '.py', start_line_number=lineno)
print()
print()
tree = ast.parse(source)
ast.increment_lineno(tree, lineno - 1)
return tree
###Output
_____no_output_____
###Markdown
The `transform()` method applies the list of transformers defined earlier in this chapter.
###Code
class Slicer(Slicer):
def transformers(self) -> List[NodeTransformer]:
"""List of transformers to apply. To be extended in subclasses."""
return [
TrackCallTransformer(),
TrackSetTransformer(),
TrackGetTransformer(),
TrackControlTransformer(),
TrackReturnTransformer(),
TrackParamsTransformer()
]
def transform(self, tree: AST) -> AST:
"""Apply transformers on `tree`. May be extended in subclasses."""
# Apply transformers
for transformer in self.transformers():
if self.log >= 3:
print(transformer.__class__.__name__ + ':')
transformer.visit(tree)
ast.fix_missing_locations(tree)
if self.log >= 3:
print_content(
astor.to_source(tree,
add_line_information=self.log >= 4),
'.py')
print()
print()
if 0 < self.log < 3:
print_content(astor.to_source(tree), '.py')
print()
print()
return tree
###Output
_____no_output_____
###Markdown
The `execute()` method executes the transformed tree (such that we get the new definitions). We also make the dependency tracker available for the code in the `with` block.
###Code
class Slicer(Slicer):
def execute(self, tree: AST, item: Any) -> None:
"""Compile and execute `tree`. May be extended in subclasses."""
# We pass the source file of `item` such that we can retrieve it
# when accessing the location of the new compiled code
source = cast(str, inspect.getsourcefile(item))
code = compile(tree, source, 'exec')
# Execute the code, resulting in a redefinition of item
exec(code, self.globals)
self.globals[DATA_TRACKER] = self.dependency_tracker
###Output
_____no_output_____
###Markdown
The `instrument()` method puts all these together, first parsing the item into a tree, then transforming and executing the tree.
###Code
class Slicer(Slicer):
def instrument(self, item: Any) -> None:
"""Instrument `item`, transforming its source code, and re-defining it."""
super().instrument(item)
tree = self.parse(item)
tree = self.transform(tree)
self.execute(tree, item)
###Output
_____no_output_____
###Markdown
When we restore the original definition (after the `with` block), we save the dependency tracker again.
###Code
class Slicer(Slicer):
def restore(self) -> None:
"""Restore original code."""
if DATA_TRACKER in self.globals:
self.saved_dependencies = self.globals[DATA_TRACKER]
del self.globals[DATA_TRACKER]
super().restore()
###Output
_____no_output_____
###Markdown
Three convenience functions allow us to see the dependencies as (well) dependencies, as code, and as graph. These simply invoke the respective functions on the saved dependencies.
###Code
class Slicer(Slicer):
def dependencies(self) -> Dependencies:
"""Return collected dependencies."""
if self.saved_dependencies is None:
return Dependencies({}, {})
return self.saved_dependencies.dependencies()
def code(self, *args: Any, **kwargs: Any) -> None:
"""Show code of instrumented items, annotated with dependencies."""
first = True
for item in self.items_to_instrument:
if not first:
print()
self.dependencies().code(item, *args, **kwargs) # type: ignore
first = False
def graph(self, *args: Any, **kwargs: Any) -> Digraph:
"""Show dependency graph."""
return self.dependencies().graph(*args, **kwargs) # type: ignore
def _repr_svg_(self) -> Any:
"""If the object is output in Jupyter, render dependencies as a SVG graph"""
return self.graph()._repr_svg_()
###Output
_____no_output_____
###Markdown
Let us put `Slicer` into action. We track our `middle()` function:
###Code
with Slicer(middle) as slicer:
m = middle(2, 1, 3)
m
###Output
_____no_output_____
###Markdown
These are the dependencies in string form (used when printed):
###Code
print(slicer.dependencies())
###Output
middle():
<test> (2) <= z (1), y (1)
<test> (3) <= x (1), y (1); <- <test> (2)
<test> (5) <= z (1), x (1); <- <test> (3)
<middle() return value> (6) <= y (1); <- <test> (5)
###Markdown
This is the code form:
###Code
slicer.code()
###Output
* 1 [34mdef[39;49;00m [32mmiddle[39;49;00m(x, y, z): [37m# type: ignore[39;49;00m
* 2 [34mif[39;49;00m y < z: [37m# <= z (1), y (1)[39;49;00m
* 3 [34mif[39;49;00m x < y: [37m# <= x (1), y (1); <- <test> (2)[39;49;00m
4 [34mreturn[39;49;00m y
* 5 [34melif[39;49;00m x < z: [37m# <= z (1), x (1); <- <test> (3)[39;49;00m
* 6 [34mreturn[39;49;00m y [37m# <= y (1); <- <test> (5)[39;49;00m
7 [34melse[39;49;00m:
8 [34mif[39;49;00m x > y:
9 [34mreturn[39;49;00m y
10 [34melif[39;49;00m x > z:
11 [34mreturn[39;49;00m x
12 [34mreturn[39;49;00m z
###Markdown
And this is the graph form:
###Code
slicer
###Output
_____no_output_____
###Markdown
You can also access the raw `repr()` form, which allows you to reconstruct dependencies at any time. (This is how we showed off dependencies at the beginning of this chapter, before even introducing the code that computes them.)
###Code
print(repr(slicer.dependencies()))
###Output
Dependencies(
data={
('x', (middle, 1)): set(),
('y', (middle, 1)): set(),
('z', (middle, 1)): set(),
('<test>', (middle, 2)): {('z', (middle, 1)), ('y', (middle, 1))},
('<test>', (middle, 3)): {('x', (middle, 1)), ('y', (middle, 1))},
('<test>', (middle, 5)): {('z', (middle, 1)), ('x', (middle, 1))},
('<middle() return value>', (middle, 6)): {('y', (middle, 1))}},
control={
('x', (middle, 1)): set(),
('y', (middle, 1)): set(),
('z', (middle, 1)): set(),
('<test>', (middle, 2)): set(),
('<test>', (middle, 3)): {('<test>', (middle, 2))},
('<test>', (middle, 5)): {('<test>', (middle, 3))},
('<middle() return value>', (middle, 6)): {('<test>', (middle, 5))}})
###Markdown
Diagnostics The `Slicer` constructor accepts a `log` argument (default: False), which can be set to show various intermediate results:* `log=True` (or `log=1`): Show instrumented source code* `log=2`: Also log execution* `log=3`: Also log individual transformer steps* `log=4`: Also log source line numbers More Examples Let us demonstrate our `Slicer` class on a few more examples. Square Root The `square_root()` function from [the chapter on assertions](Assertions.ipynb) demonstrates a nice interplay between data and control dependencies.
###Code
import math
from Assertions import square_root # minor dependency
###Output
_____no_output_____
###Markdown
Here is the original source code:
###Code
print_content(inspect.getsource(square_root), '.py')
###Output
[34mdef[39;49;00m [32msquare_root[39;49;00m(x): [37m# type: ignore[39;49;00m
[34massert[39;49;00m x >= [34m0[39;49;00m [37m# precondition[39;49;00m
approx = [34mNone[39;49;00m
guess = x / [34m2[39;49;00m
[34mwhile[39;49;00m approx != guess:
approx = guess
guess = (approx + x / approx) / [34m2[39;49;00m
[34massert[39;49;00m math.isclose(approx * approx, x)
[34mreturn[39;49;00m approx
###Markdown
Turning on logging shows the instrumented version:
###Code
with Slicer(square_root, log=True) as root_slicer:
y = square_root(2.0)
###Output
Instrumenting <function square_root at 0x7f943d683620>
[34mdef[39;49;00m [32msquare_root[39;49;00m(x):
_data.param([33m'[39;49;00m[33mx[39;49;00m[33m'[39;49;00m, x, pos=[34m1[39;49;00m, last=[34mTrue[39;49;00m)
[34massert[39;49;00m _data.get([33m'[39;49;00m[33mx[39;49;00m[33m'[39;49;00m, x) >= [34m0[39;49;00m
approx = _data.set([33m'[39;49;00m[33mapprox[39;49;00m[33m'[39;49;00m, [34mNone[39;49;00m)
guess = _data.set([33m'[39;49;00m[33mguess[39;49;00m[33m'[39;49;00m, _data.get([33m'[39;49;00m[33mx[39;49;00m[33m'[39;49;00m, x) / [34m2[39;49;00m)
[34mwhile[39;49;00m _data.test(_data.get([33m'[39;49;00m[33mapprox[39;49;00m[33m'[39;49;00m, approx) != _data.get([33m'[39;49;00m[33mguess[39;49;00m[33m'[39;49;00m, guess)):
[34mwith[39;49;00m _data:
approx = _data.set([33m'[39;49;00m[33mapprox[39;49;00m[33m'[39;49;00m, _data.get([33m'[39;49;00m[33mguess[39;49;00m[33m'[39;49;00m, guess))
guess = _data.set([33m'[39;49;00m[33mguess[39;49;00m[33m'[39;49;00m, (_data.get([33m'[39;49;00m[33mapprox[39;49;00m[33m'[39;49;00m, approx) + _data
.get([33m'[39;49;00m[33mx[39;49;00m[33m'[39;49;00m, x) / _data.get([33m'[39;49;00m[33mapprox[39;49;00m[33m'[39;49;00m, approx)) / [34m2[39;49;00m)
[34massert[39;49;00m _data.ret(_data.call(_data.get([33m'[39;49;00m[33mmath[39;49;00m[33m'[39;49;00m, math).isclose)(_data.arg(
_data.get([33m'[39;49;00m[33mapprox[39;49;00m[33m'[39;49;00m, approx) * _data.get([33m'[39;49;00m[33mapprox[39;49;00m[33m'[39;49;00m, approx), pos=[34m1[39;49;00m),
_data.arg(_data.get([33m'[39;49;00m[33mx[39;49;00m[33m'[39;49;00m, x), pos=[34m2[39;49;00m)))
[34mreturn[39;49;00m _data.set([33m'[39;49;00m[33m<square_root() return value>[39;49;00m[33m'[39;49;00m, _data.get([33m'[39;49;00m[33mapprox[39;49;00m[33m'[39;49;00m,
approx))
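###Markdown
Higher `log` levels, as described under "Diagnostics" above, reveal even more internals. A sketch – `log=2` additionally logs each step of the execution (the lengthy output is omitted here):
###Code
# log=2: also log execution (see "Diagnostics" above)
with Slicer(square_root, log=2) as verbose_root_slicer:
    y = square_root(4.0)
###Output
_____no_output_____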
###Markdown
The dependency graph shows how `guess` and `approx` flow into each other until they are the same.
###Code
root_slicer
###Output
_____no_output_____
###Markdown
Again, we can show the code annotated with dependencies:
###Code
root_slicer.code()
###Output
* 54 [34mdef[39;49;00m [32msquare_root[39;49;00m(x): [37m# type: ignore[39;49;00m
55 [34massert[39;49;00m x >= [34m0[39;49;00m [37m# precondition[39;49;00m
56
* 57 approx = [34mNone[39;49;00m
* 58 guess = x / [34m2[39;49;00m [37m# <= x (54)[39;49;00m
* 59 [34mwhile[39;49;00m approx != guess: [37m# <= guess (61), guess (58), approx (57), approx (60)[39;49;00m
* 60 approx = guess [37m# <= guess (61), guess (58); <- <test> (59)[39;49;00m
* 61 guess = (approx + x / approx) / [34m2[39;49;00m [37m# <= x (54), approx (60); <- <test> (59)[39;49;00m
62
63 [34massert[39;49;00m math.isclose(approx * approx, x)
* 64 [34mreturn[39;49;00m approx [37m# <= approx (60)[39;49;00m
###Markdown
The astute reader may notice that `assert p` statements do not control the following code, although such a statement would be equivalent to `if not p: raise Exception`. Why is that?
###Code
quiz("Why don't `assert` statements induce control dependencies?",
[
"We have no special handling of `assert` statements",
"We have no special handling of `raise` statements",
"Assertions are not supposed to act as controlling mechanisms",
"All of the above",
], '(1 * 1 << 1 * 1 << 1 * 1)')
###Output
_____no_output_____
###Markdown
Indeed: we treat assertions as "neutral" in the sense that they do not affect the remainder of the program – if they are turned off, they have no effect; and if they are turned on, the remaining program logic should not depend on them. (Our instrumentation also has no special treatment of `assert`, `raise`, or even `return` statements; the latter two should be handled by our `with` blocks.)
###Code
# print(repr(root_slicer))
###Output
_____no_output_____
###Markdown
Removing HTML Markup Let us turn to our ongoing example, `remove_html_markup()`. This is what its instrumented code looks like:
###Code
with Slicer(remove_html_markup) as rhm_slicer:
s = remove_html_markup("<foo>bar</foo>")
###Output
_____no_output_____
###Markdown
The graph is as discussed in the introduction to this chapter:
###Code
rhm_slicer
# print(repr(rhm_slicer.dependencies()))
rhm_slicer.code()
###Output
* 238 [34mdef[39;49;00m [32mremove_html_markup[39;49;00m(s): [37m# type: ignore[39;49;00m
* 239 tag = [34mFalse[39;49;00m
* 240 quote = [34mFalse[39;49;00m
* 241 out = [33m"[39;49;00m[33m"[39;49;00m
242
* 243 [34mfor[39;49;00m c [35min[39;49;00m s: [37m# <= s (238)[39;49;00m
244 [34massert[39;49;00m tag [35mor[39;49;00m [35mnot[39;49;00m quote
245
* 246 [34mif[39;49;00m c == [33m'[39;49;00m[33m<[39;49;00m[33m'[39;49;00m [35mand[39;49;00m [35mnot[39;49;00m quote: [37m# <= c (243), quote (240)[39;49;00m
* 247 tag = [34mTrue[39;49;00m [37m# <- <test> (246)[39;49;00m
* 248 [34melif[39;49;00m c == [33m'[39;49;00m[33m>[39;49;00m[33m'[39;49;00m [35mand[39;49;00m [35mnot[39;49;00m quote: [37m# <= c (243), quote (240); <- <test> (246)[39;49;00m
* 249 tag = [34mFalse[39;49;00m [37m# <- <test> (248)[39;49;00m
* 250 [34melif[39;49;00m (c == [33m'[39;49;00m[33m"[39;49;00m[33m'[39;49;00m [35mor[39;49;00m c == [33m"[39;49;00m[33m'[39;49;00m[33m"[39;49;00m) [35mand[39;49;00m tag: [37m# <= c (243); <- <test> (248)[39;49;00m
251 quote = [35mnot[39;49;00m quote
* 252 [34melif[39;49;00m [35mnot[39;49;00m tag: [37m# <= tag (249), tag (247); <- <test> (250)[39;49;00m
* 253 out = out + c [37m# <= c (243), out (241), out (253); <- <test> (252)[39;49;00m
254
* 255 [34mreturn[39;49;00m out [37m# <= out (253)[39;49;00m
###Markdown
We can also compute slices over the dependencies:
###Code
_, start_remove_html_markup = inspect.getsourcelines(remove_html_markup)
start_remove_html_markup
slicing_criterion = ('tag', (remove_html_markup,
start_remove_html_markup + 9))
tag_deps = rhm_slicer.dependencies().backward_slice(slicing_criterion) # type: ignore
tag_deps
# repr(tag_deps)
###Output
_____no_output_____
###Markdown
Calls and Augmented Assign Our last example covers augmented assigns and data flow across function calls. We introduce two simple functions `add_to()` and `mul_with()`:
###Code
def add_to(n, m): # type: ignore
n += m
return n
def mul_with(x, y): # type: ignore
x *= y
return x
###Output
_____no_output_____
###Markdown
And we put these two together in a single call:
###Code
def test_math() -> None:
return mul_with(1, add_to(2, 3))
with Slicer(add_to, mul_with, test_math) as math_slicer:
test_math()
###Output
_____no_output_____
###Markdown
The resulting dependence graph nicely captures the data flow between these calls, notably arguments and parameters:
###Code
math_slicer
###Output
_____no_output_____
###Markdown
These are also reflected in the code view:
###Code
math_slicer.code()
###Output
* 1 [34mdef[39;49;00m [32madd_to[39;49;00m(n, m): [37m# type: ignore[39;49;00m
* 2 n += m [37m# <= n (1), m (1)[39;49;00m
* 3 [34mreturn[39;49;00m n [37m# <= n (2)[39;49;00m
* 1 [34mdef[39;49;00m [32mmul_with[39;49;00m(x, y): [37m# type: ignore # <= <add_to() return value> (add_to:3)[39;49;00m
* 2 x *= y [37m# <= x (1), y (1)[39;49;00m
* 3 [34mreturn[39;49;00m x [37m# <= x (2)[39;49;00m
1 [34mdef[39;49;00m [32mtest_math[39;49;00m() -> [34mNone[39;49;00m:
* 2 [34mreturn[39;49;00m mul_with([34m1[39;49;00m, add_to([34m2[39;49;00m, [34m3[39;49;00m)) [37m# <= <mul_with() return value> (mul_with:3)[39;49;00m
###Markdown
More Applications \todo{Present some more applications}:* Learning across multiple (passing) runs* Detecting deviations* Statistical debugging with dependencies Things that do not Work Our slicer (and especially the underlying dependency tracker) is still a proof of concept. A number of Python features are not or only partially supported, and/or hardly tested:* __Exceptions__ are not handled. The code assumes that for every `call()`, there is a matching `ret()`; when exceptions break this, dependencies across function calls and arguments may be assigned incorrectly.* __Multiple definitions on a single line__ as in `x = y; x = 1` are not handled correctly. Our implementation assumes that there is one statement per line.* __If-Expressions__ (`y = 1 if x else 0`) do not create control dependencies, as there are no statements to control. Neither do `if` clauses in comprehensions.* __Asynchronous functions__ (`async`, `await`) are not tested.In these cases, the instrumentation and the underlying dependency tracker may fail to identify control and/or data flows. The semantics of the code, however, should always stay unchanged. Synopsis This chapter provides a `Slicer` class to automatically determine and visualize dynamic dependencies. When we say that a variable $x$ depends on a variable $y$ (written $x \leftarrow y$), we distinguish two kinds of dependencies:* **data dependencies**: $x$ obtains its value from a computation involving the value of $y$.* **control dependencies**: $x$ obtains its value because of a computation involving the value of $y$.Such dependencies are crucial for debugging, as they allow us to determine the origins of individual values (and notably incorrect values). To determine dynamic dependencies in a function `func` and its callees `func1`, `func2`, etc., use```pythonwith Slicer(func, func1, func2) as slicer: ```and then `slicer.graph()` or `slicer.code()` to examine dependencies. Here is an example. The `demo()` function computes some number from `x`:
###Code
def demo(x: int) -> int:
z = x
while x <= z <= 64:
z *= 2
return z
###Output
_____no_output_____
###Markdown
By using `with Slicer(demo)`, we first instrument `demo()` and then execute it:
###Code
with Slicer(demo) as slicer:
demo(10)
###Output
_____no_output_____
###Markdown
After execution is complete, you can output `slicer` to visualize the dependencies as a graph. Data dependencies are shown as black solid edges; control dependencies are shown as grey dashed edges. We see how the parameter `x` flows into `z`, which is returned after some computation that is control dependent on a `<test>` involving `z`.
###Code
slicer
###Output
_____no_output_____
###Markdown
An alternate representation is `slicer.code()`, annotating the instrumented source code with (backward) dependencies. Data dependencies are shown with `<=`, control dependencies with `<-`; locations (lines) are shown in parentheses.
###Code
slicer.code()
###Output
* 1 [34mdef[39;49;00m [32mdemo[39;49;00m(x: [36mint[39;49;00m) -> [36mint[39;49;00m:
* 2 z = x [37m# <= x (1)[39;49;00m
* 3 [34mwhile[39;49;00m x <= z <= [34m64[39;49;00m: [37m# <= x (1), z (2), z (4)[39;49;00m
* 4 z *= [34m2[39;49;00m [37m# <= z (2), z (4); <- <test> (3)[39;49;00m
* 5 [34mreturn[39;49;00m z [37m# <= z (4)[39;49;00m
###Markdown
Dependencies can also be retrieved programmatically. The `dependencies()` method returns a `Dependencies` object encapsulating the dependency graph. The method `all_vars()` returns all variables in the dependency graph. Each variable is encoded as a pair (_name_, _location_) where _location_ is a pair (_codename_, _lineno_).
###Code
slicer.dependencies().all_vars()
###Output
_____no_output_____
###Markdown
`code()` and `graph()` methods can also be applied on dependencies. The method `backward_slice(var)` returns a backward slice for the given variable. To retrieve where `z` in Line 2 came from, use:
###Code
_, start_demo = inspect.getsourcelines(demo)
start_demo
slicer.dependencies().backward_slice(('z', (demo, start_demo + 1))).graph() # type: ignore
###Output
_____no_output_____
###Markdown
Here are the classes defined in this chapter. A `Slicer` instruments a program, using a `DependencyTracker` at run time to collect `Dependencies`.
###Code
# ignore
from ClassDiagram import display_class_hierarchy, class_tree
# ignore
assert class_tree(Slicer)[0][0] == Slicer
# ignore
display_class_hierarchy([Slicer, DependencyTracker, StackInspector, Dependencies],
abstract_classes=[
StackInspector,
Instrumenter
],
public_methods=[
StackInspector.caller_frame,
StackInspector.caller_function,
StackInspector.caller_globals,
StackInspector.caller_locals,
StackInspector.caller_location,
StackInspector.search_frame,
StackInspector.search_func,
Instrumenter.__init__,
Instrumenter.__enter__,
Instrumenter.__exit__,
Instrumenter.instrument,
Slicer.__init__,
Slicer.code,
Slicer.dependencies,
Slicer.graph,
Slicer._repr_svg_,
DataTracker.__init__,
DataTracker.__enter__,
DataTracker.__exit__,
DataTracker.arg,
DataTracker.augment,
DataTracker.call,
DataTracker.get,
DataTracker.param,
DataTracker.ret,
DataTracker.set,
DataTracker.test,
DataTracker.__repr__,
DependencyTracker.__init__,
DependencyTracker.__enter__,
DependencyTracker.__exit__,
DependencyTracker.arg,
# DependencyTracker.augment,
DependencyTracker.call,
DependencyTracker.get,
DependencyTracker.param,
DependencyTracker.ret,
DependencyTracker.set,
DependencyTracker.test,
DependencyTracker.__repr__,
Dependencies.__init__,
Dependencies.__repr__,
Dependencies.__str__,
Dependencies._repr_svg_,
Dependencies.code,
Dependencies.graph,
Dependencies.backward_slice,
Dependencies.all_functions,
Dependencies.all_vars,
],
project='debuggingbook')
###Output
_____no_output_____
###Markdown
Tracking Failure OriginsThe question of "Where does this value come from?" is fundamental for debugging. Which earlier variables could possibly have influenced the current erroneous state? And how did their values come to be?When programmers read code during debugging, they scan it for potential _origins_ of given values. This can be a tedious experience, notably if the origins are spread across multiple separate locations, possibly even in different modules. In this chapter, we thus investigate means to _determine such origins_ automatically – by collecting data and control dependencies during program execution.
###Code
from bookutils import YouTubeVideo
YouTubeVideo("sjf3cOR0lcI")
###Output
_____no_output_____
###Markdown
**Prerequisites*** You should have read the [Introduction to Debugging](Intro_Debugging).* To understand how to compute dependencies automatically (the second half of this chapter), you will need * advanced knowledge of Python semantics * knowledge on how to instrument and transform code * knowledge on how an interpreter works
###Code
import bookutils
from bookutils import quiz, next_inputs, print_content
###Output
_____no_output_____
###Markdown
SynopsisTo [use the code provided in this chapter](Importing.ipynb), write```python>>> from debuggingbook.Slicer import ```and then make use of the following features.This chapter provides a `Slicer` class to automatically determine and visualize dynamic dependencies. When we say that a variable $x$ depends on a variable $y$ (written $x \leftarrow y$), we distinguish two kinds of dependencies:* **data dependencies**: $x$ obtains its value from a computation involving the value of $y$.* **control dependencies**: $x$ obtains its value because of a computation involving the value of $y$.Such dependencies are crucial for debugging, as they allow us to determine the origins of individual values (and notably incorrect values).To determine dynamic dependencies in a function `func` and its callees `func1`, `func2`, etc., use```pythonwith Slicer(func, func1, func2) as slicer: ```and then `slicer.graph()` or `slicer.code()` to examine dependencies.Here is an example. The `demo()` function computes some number from `x`:```python>>> def demo(x: int) -> int:>>> z = x>>> while x <= z <= 64:>>> z *= 2>>> return z```By using `with Slicer(demo)`, we first instrument `demo()` and then execute it:```python>>> with Slicer(demo) as slicer:>>> demo(10)```After execution is complete, you can output `slicer` to visualize the dependencies as a graph. Data dependencies are shown as black solid edges; control dependencies are shown as grey dashed edges. We see how the parameter `x` flows into `z`, which is returned after some computation that is control dependent on a `<test>` involving `z`.```python>>> slicer```An alternate representation is `slicer.code()`, annotating the instrumented source code with (backward) dependencies. Data dependencies are shown with `<=`, control dependencies with `<-`; locations (lines) are shown in parentheses.```python>>> slicer.code()* 1 def demo(x: int) -> int:* 2 z = x  # <= x (1)* 3 while x <= z <= 64:  # <= x (1), z (2), z (4)* 4 z *= 2  # <= z (2), z (4); <- <test> (3)* 5 return z  # <= z (4)```Dependencies can also be retrieved programmatically. The `dependencies()` method returns a `Dependencies` object encapsulating the dependency graph.The method `all_vars()` returns all variables in the dependency graph. Each variable is encoded as a pair (_name_, _location_) where _location_ is a pair (_codename_, _lineno_).```python>>> slicer.dependencies().all_vars(){('<demo() return value>', (<function demo>, 5)), ('<test>', (<function demo>, 3)), ('x', (<function demo>, 1)), ('z', (<function demo>, 2)), ('z', (<function demo>, 4))}````code()` and `graph()` methods can also be applied on dependencies. The method `backward_slice(var)` returns a backward slice for the given variable. To retrieve where `z` in Line 2 came from, use:```python>>> _, start_demo = inspect.getsourcelines(demo)>>> start_demo1>>> slicer.dependencies().backward_slice(('z', (demo, start_demo + 1))).graph()  # type: ignore```Here are the classes defined in this chapter. A `Slicer` instruments a program, using a `DependencyTracker` at run time to collect `Dependencies`.\todo{Use slices to enforce (lack of) specific information flows}\todo{Use slices in statistical debugging}
###Code
from typing import Set, List, Tuple, Any, Callable, Dict, Optional, Union, Type
from typing import Generator
import inspect
import warnings
###Output
_____no_output_____
###Markdown
DependenciesIn the [Introduction to debugging](Intro_Debugging.ipynb), we have seen how faults in a program state propagate to eventually become visible as failures. This induces a debugging strategy called _tracking origins_: 1. We start with a single faulty state _f_ – the failure2. We determine f's _origins_ – the parts of earlier states that could have caused the faulty state _f_3. For each of these origins _e_, we determine whether they are faulty or not4. For each of the faulty origins, we in turn determine _their_ origins.5. If we find a part of the state that is faulty, yet has only correct origins, we have found the defect. In all generality, a "part of the state" can be anything that can influence the program – some configuration setting, some database content, or the state of a device. Almost always, though, it is through _individual variables_ that a part of the state manifests itself.The good news is that variables do not take arbitrary values at arbitrary times – instead, they are set and accessed at precise moments in time, as determined by the program's semantics. This allows us to determine their _origins_ by reading program code. Let us assume you have a piece of code that reads as follows. The `middle()` function is supposed to return the "middle" number of three values `x`, `y`, and `z` – that is, the one number that neither is the minimum nor the maximum.
###Code
def middle(x, y, z): # type: ignore
if y < z:
if x < y:
return y
elif x < z:
return y
else:
if x > y:
return y
elif x > z:
return x
return z
###Output
_____no_output_____
###Markdown
In most cases, `middle()` runs just fine:
###Code
m = middle(1, 2, 3)
m
###Output
_____no_output_____
###Markdown
In others, however, it returns the wrong value:
###Code
m = middle(2, 1, 3)
m
###Output
_____no_output_____
###Markdown
This is a typical debugging situation: You see a value that is erroneous, and you want to find out where it came from. * In our case, we see that the erroneous value was returned from `middle()`, so we identify the five `return` statements in `middle()` that the value could have come from.* The value returned is the value of `y`, and neither `x`, `y`, nor `z` are altered during the execution of `middle()`. Hence, it must be one of the three `return y` statements that is the origin of `m`. But which one?For our small example, we can fire up an interactive debugger and simply step through the function; this reveals the conditions evaluated and the `return` statement executed.
###Code
import Debugger # minor dependency
# ignore
next_inputs(["step", "step", "step", "step", "quit"]);
with Debugger.Debugger():
middle(2, 1, 3)
###Output
Calling middle(z = 3, y = 1, x = 2)
###Markdown
We now see that it was the second `return` statement that returned the incorrect value. But why was it executed after all? To this end, we can resort to the `middle()` source code and have a look at those conditions that caused the `return y` statement to be executed. Indeed, the conditions `y < z`, `x > y`, and finally `x < z` again are _origins_ of the returned value – and in turn have `x`, `y`, and `z` as origins. In our above reasoning about origins, we have encountered two kinds of origins:* earlier _data values_ (such as the value of `y` being returned) and* earlier _control conditions_ (such as the `if` conditions governing the `return y` statement).The later parts of the state that can be influenced by such origins are said to be _dependent_ on these origins. Speaking of variables, a variable $x$ _depends_ on the value of a variable $y$ (written as $x \leftarrow y$) if a change in $y$ could affect the value of $x$. We distinguish two kinds of dependencies $x \leftarrow y$, aligned with the two kinds of origins as outlined above:* **Data dependency**: $x$ obtains its value from a computation involving the value of $y$. In our example, `m` is data dependent on the return value of `middle()`.* **Control dependency**: $x$ obtains its value because of a computation involving the value of $y$. In our example, the value returned by `return y` is control dependent on the several conditions along its path, which involve `x`, `y`, and `z`. Let us examine these dependencies in more detail. Excursion: Visualizing Dependencies Note: This is an excursion, diverting away from the main flow of the chapter. Unless you know what you are doing, you are encouraged to skip this part. To illustrate our examples, we introduce a `Dependencies` class that captures dependencies between variables at specific locations. A Class for Dependencies `Dependencies` holds two dependency graphs. `data` holds data dependencies, `control` holds control dependencies. Each of the two is organized as a dictionary holding _nodes_ as keys and sets of nodes as values. Each node comes as a tuple```python(variable_name, location) ``` where `variable_name` is a string and `location` is a pair```python(func, lineno) ``` denoting a unique location in the code. This is also reflected in the following type definitions:
###Code
Location = Tuple[Callable, int]
Node = Tuple[str, Location]
Dependency = Dict[Node, Set[Node]]
###Output
_____no_output_____
###Markdown
In this chapter, for many purposes, we need to look up a function's location, source code, or simply its definition. The class `StackInspector` provides a number of convenience functions for this purpose.
###Code
from StackInspector import StackInspector
###Output
_____no_output_____
###Markdown
The `Dependencies` class builds on `StackInspector` to capture dependencies.
###Code
class Dependencies(StackInspector):
"""A dependency graph"""
def __init__(self,
data: Optional[Dependency] = None,
control: Optional[Dependency] = None) -> None:
"""
Create a dependency graph from `data` and `control`.
Both `data` and `control` are dictionaries
holding _nodes_ as keys and sets of nodes as values.
Each node comes as a tuple (variable_name, location)
where `variable_name` is a string
and `location` is a pair (function, lineno)
where `function` is a callable and `lineno` is a line number
denoting a unique location in the code.
"""
if data is None:
data = {}
if control is None:
control = {}
self.data = data
self.control = control
for var in self.data:
self.control.setdefault(var, set())
for var in self.control:
self.data.setdefault(var, set())
self.validate()
###Output
_____no_output_____
###Markdown
The `validate()` method checks for consistency.
###Code
class Dependencies(Dependencies):
def validate(self) -> None:
"""Check dependency structure."""
assert isinstance(self.data, dict)
assert isinstance(self.control, dict)
        for node in set(self.data.keys()) | set(self.control.keys()):
var_name, location = node
assert isinstance(var_name, str)
func, lineno = location
assert callable(func)
assert isinstance(lineno, int)
###Output
_____no_output_____
###Markdown
The `source()` method returns the source code for a given node.
###Code
class Dependencies(Dependencies):
def _source(self, node: Node) -> str:
# Return source line, or ''
(name, location) = node
func, lineno = location
if not func:
# No source
return ''
try:
source_lines, first_lineno = inspect.getsourcelines(func)
except OSError:
warnings.warn(f"Couldn't find source "
f"for {func} ({func.__name__})")
return ''
try:
line = source_lines[lineno - first_lineno].strip()
except IndexError:
return ''
return line
def source(self, node: Node) -> str:
"""Return the source code for a given node."""
line = self._source(node)
if line:
return line
(name, location) = node
func, lineno = location
code_name = func.__name__
if code_name.startswith('<'):
return code_name
else:
return f'<{code_name}()>'
test_deps = Dependencies()
test_deps.source(('z', (middle, 1)))
###Output
_____no_output_____
###Markdown
Drawing Dependencies Both data and control form a graph between nodes, and can be visualized as such. We use the `graphviz` package for creating such visualizations.
###Code
from graphviz import Digraph, nohtml
###Output
_____no_output_____
###Markdown
`make_graph()` sets the basic graph attributes.
###Code
import html
class Dependencies(Dependencies):
NODE_COLOR = 'peachpuff'
FONT_NAME = 'Fira Mono, Courier, monospace'
def make_graph(self, name: str = "dependencies", comment: str = "Dependencies") -> Digraph:
return Digraph(name=name, comment=comment,
graph_attr={
},
node_attr={
'style': 'filled',
'shape': 'box',
'fillcolor': self.NODE_COLOR,
'fontname': self.FONT_NAME
},
edge_attr={
'fontname': self.FONT_NAME
})
###Output
_____no_output_____
###Markdown
`graph()` returns a graph visualization.
###Code
class Dependencies(Dependencies):
def graph(self) -> Digraph:
"""Draw dependencies."""
self.validate()
g = self.make_graph()
self.draw_dependencies(g)
self.add_hierarchy(g)
return g
def _repr_svg_(self) -> Any:
"""If the object is output in Jupyter, render dependencies as a SVG graph"""
return self.graph()._repr_svg_()
###Output
_____no_output_____
###Markdown
The main part of graph drawing takes place in two methods, `draw_dependencies()` and `add_hierarchy()`. `draw_dependencies()` traverses the graph, adding nodes and edges from the dependencies.
###Code
class Dependencies(Dependencies):
def all_vars(self) -> Set[Node]:
"""Return a set of all variables (as `var_name`, `location`) in the dependencies"""
all_vars = set()
for var in self.data:
all_vars.add(var)
for source in self.data[var]:
all_vars.add(source)
for var in self.control:
all_vars.add(var)
for source in self.control[var]:
all_vars.add(source)
return all_vars
class Dependencies(Dependencies):
def draw_dependencies(self, g: Digraph) -> None:
for var in self.all_vars():
g.node(self.id(var),
label=self.label(var),
tooltip=self.tooltip(var))
if var in self.data:
for source in self.data[var]:
g.edge(self.id(source), self.id(var))
if var in self.control:
for source in self.control[var]:
g.edge(self.id(source), self.id(var),
style='dashed', color='grey')
###Output
_____no_output_____
###Markdown
`draw_dependencies()` makes use of a few helper functions.
###Code
class Dependencies(Dependencies):
def id(self, var: Node) -> str:
"""Return a unique ID for `var`."""
id = ""
# Avoid non-identifier characters
for c in repr(var):
if c.isalnum() or c == '_':
id += c
if c == ':' or c == ',':
id += '_'
return id
def label(self, var: Node) -> str:
"""Render node `var` using HTML style."""
(name, location) = var
source = self.source(var)
title = html.escape(name)
if name.startswith('<'):
title = f'<I>{title}</I>'
label = f'<B>{title}</B>'
if source:
label += (f'<FONT POINT-SIZE="9.0"><BR/><BR/>'
f'{html.escape(source)}'
f'</FONT>')
label = f'<{label}>'
return label
def tooltip(self, var: Node) -> str:
"""Return a tooltip for node `var`."""
(name, location) = var
func, lineno = location
return f"{func.__name__}:{lineno}"
###Output
_____no_output_____
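###Markdown
We can try out these helpers directly (a quick sketch): `id()` yields an identifier usable within the graph, and `label()` an HTML-style node label.
###Code
helper_demo = Dependencies()
helper_demo.id(('x', (middle, 1))), helper_demo.label(('x', (middle, 1)))
###Output
_____no_output_____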
###Markdown
In the second part of graph drawing, `add_hierarchy()` adds invisible edges to ensure that nodes with lower line numbers are drawn above nodes with higher line numbers.
###Code
class Dependencies(Dependencies):
def add_hierarchy(self, g: Digraph) -> Digraph:
"""Add invisible edges for a proper hierarchy."""
functions = self.all_functions()
for func in functions:
last_var = None
last_lineno = 0
for (lineno, var) in functions[func]:
if last_var is not None and lineno > last_lineno:
g.edge(self.id(last_var),
self.id(var),
style='invis')
last_var = var
last_lineno = lineno
return g
class Dependencies(Dependencies):
def all_functions(self) -> Dict[Callable, List[Tuple[int, Node]]]:
"""
Return mapping
{`function`: [(`lineno`, `var`), (`lineno`, `var`), ...], ...}
for all functions in the dependencies.
"""
functions: Dict[Callable, List[Tuple[int, Node]]] = {}
for var in self.all_vars():
(name, location) = var
func, lineno = location
if func not in functions:
functions[func] = []
functions[func].append((lineno, var))
for func in functions:
functions[func].sort()
return functions
###Output
_____no_output_____
###Markdown
Here comes the graph in all its glory:
###Code
def middle_deps() -> Dependencies:
return Dependencies({('z', (middle, 1)): set(), ('y', (middle, 1)): set(), ('x', (middle, 1)): set(), ('<test>', (middle, 2)): {('y', (middle, 1)), ('z', (middle, 1))}, ('<test>', (middle, 3)): {('y', (middle, 1)), ('x', (middle, 1))}, ('<test>', (middle, 5)): {('z', (middle, 1)), ('x', (middle, 1))}, ('<middle() return value>', (middle, 6)): {('y', (middle, 1))}}, {('z', (middle, 1)): set(), ('y', (middle, 1)): set(), ('x', (middle, 1)): set(), ('<test>', (middle, 2)): set(), ('<test>', (middle, 3)): {('<test>', (middle, 2))}, ('<test>', (middle, 5)): {('<test>', (middle, 3))}, ('<middle() return value>', (middle, 6)): {('<test>', (middle, 5))}})
middle_deps()
###Output
_____no_output_____
###Markdown
SlicesThe method `backward_slice(*criteria, mode='cd')` returns a subset of dependencies, following dependencies backward from the given *slicing criteria* `criteria`. These criteria can be* variable names (such as `<test>`); or* `(function, lineno)` pairs (such as `(middle, 3)`); or* `(var_name, (function, lineno))` locations (such as `('x', (middle, 1))`).The extra parameter `mode` controls which dependencies are to be followed:* **`d`** = data dependencies* **`c`** = control dependencies
###Code
Criterion = Union[str, Location, Node]
class Dependencies(Dependencies):
def expand_criteria(self, criteria: List[Criterion]) -> List[Node]:
"""Return list of vars matched by `criteria`."""
all_vars = []
for criterion in criteria:
criterion_var = None
criterion_func = None
criterion_lineno = None
if isinstance(criterion, str):
criterion_var = criterion
elif len(criterion) == 2 and callable(criterion[0]):
criterion_func, criterion_lineno = criterion
elif len(criterion) == 2 and isinstance(criterion[0], str):
criterion_var = criterion[0]
criterion_func, criterion_lineno = criterion[1]
else:
raise ValueError("Invalid argument")
for var in self.all_vars():
(var_name, location) = var
func, lineno = location
name_matches = (criterion_func is None or
criterion_func == func or
criterion_func.__name__ == func.__name__)
location_matches = (criterion_lineno is None or
criterion_lineno == lineno)
var_matches = (criterion_var is None or
criterion_var == var_name)
if name_matches and location_matches and var_matches:
all_vars.append(var)
return all_vars
def backward_slice(self, *criteria: Criterion,
mode: str = 'cd', depth: int = -1) -> Dependencies:
"""
Create a backward slice from nodes `criteria`.
`mode` can contain 'c' (draw control dependencies)
and 'd' (draw data dependencies) (default: 'cd')
"""
data = {}
control = {}
queue = self.expand_criteria(criteria) # type: ignore
seen = set()
while len(queue) > 0 and depth != 0:
var = queue[0]
queue = queue[1:]
seen.add(var)
if 'd' in mode:
# Follow data dependencies
data[var] = self.data[var]
for next_var in data[var]:
if next_var not in seen:
queue.append(next_var)
else:
data[var] = set()
if 'c' in mode:
# Follow control dependencies
control[var] = self.control[var]
for next_var in control[var]:
if next_var not in seen:
queue.append(next_var)
else:
control[var] = set()
depth -= 1
return Dependencies(data, control)
###Output
_____no_output_____
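###Markdown
The criteria forms are interchangeable. As a quick sketch (using the `middle_deps()` dependencies from above), a `(function, lineno)` pair selects the very same return value node as its name would:
###Code
middle_deps().backward_slice((middle, 6), mode='d')
###Output
_____no_output_____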
###Markdown
End of Excursion Data DependenciesHere is an example of a data dependency in our `middle()` program. The value `y` returned by `middle()` comes from the value `y` as originally passed as argument. We use arrows $x \leftarrow y$ to indicate that a variable $x$ depends on an earlier variable $y$:
###Code
# ignore
middle_deps().backward_slice('<middle() return value>', mode='d') # type: ignore
###Output
_____no_output_____
###Markdown
Here, we can see that the value `y` in the return statement is data dependent on the value of `y` as passed to `middle()`. An alternate interpretation of this graph is a *data flow*: The value of `y` in the upper node _flows_ into the value of `y` in the lower node. Since we consider the values of variables at specific locations in the program, such data dependencies can also be interpreted as dependencies between _statements_ – the above `return` statement thus is data dependent on the initialization of `y` in the upper node. Control DependenciesHere is an example of a control dependency. The execution of the above `return` statement is controlled by the earlier test `x < z`. We use grey dashed lines to indicate control dependencies:
###Code
# ignore
middle_deps().backward_slice('<middle() return value>', mode='c', depth=1) # type: ignore
###Output
_____no_output_____
###Markdown
This test in turn is controlled by earlier tests, so the full chain of control dependencies looks like this:
###Code
# ignore
middle_deps().backward_slice('<middle() return value>', mode='c') # type: ignore
###Output
_____no_output_____
###Markdown
Dependency GraphsThe above `<test>` values (and their statements) are in turn also dependent on earlier data, namely the `x`, `y`, and `z` values as originally passed. We can draw all data and control dependencies in a single graph, called a _program dependency graph_:
###Code
# ignore
middle_deps()
###Output
_____no_output_____
###Markdown
This graph now gives us an idea of how to proceed to track the origins of the `middle()` return value at the bottom. Its value can come from any of the origins – namely the initialization of `y` at the function call, or from the `<test>` that controls it. This test in turn depends on `x` and `z` and their associated statements, which we can now check one after the other. Note that all these dependencies in the graph are _dynamic_ dependencies – that is, they refer to statements actually evaluated in the run at hand, as well as the decisions made in that very run. There also are _static_ dependency graphs coming from static analysis of the code; but for debugging, _dynamic_ dependencies specific to the failing run are more useful. Showing Dependencies with CodeWhile a graph gives us a representation of which possible data and control flows to track, integrating dependencies with actual program code results in a compact representation that is easy to reason about. Excursion: Listing Dependencies To show dependencies as text, we introduce a method `format_var()` that shows a single node (a variable) as text. By default, a node is referenced as```pythonNAME (FUNCTION:LINENO)```However, within a given function, it makes no sense to re-state the function name again and again, so we have a shorthand```pythonNAME (LINENO)```to state a dependency on variable `NAME` in line `LINENO`.
###Code
class Dependencies(Dependencies):
def format_var(self, var: Node, current_func: Optional[Callable] = None) -> str:
"""Return string for `var` in `current_func`."""
name, location = var
func, lineno = location
if current_func and (func == current_func or func.__name__ == current_func.__name__):
return f"{name} ({lineno})"
else:
return f"{name} ({func.__name__}:{lineno})"
###Output
_____no_output_____
###Markdown
`format_var()` is used extensively in the `__str__()` string representation of dependencies, listing all nodes and their data (`<=`) and control (`<-`) dependencies.
###Code
class Dependencies(Dependencies):
def __str__(self) -> str:
"""Return string representation of dependencies"""
self.validate()
out = ""
for func in self.all_functions():
code_name = func.__name__
if out != "":
out += "\n"
out += f"{code_name}():\n"
all_vars = list(set(self.data.keys()) | set(self.control.keys()))
all_vars.sort(key=lambda var: var[1][1])
for var in all_vars:
(name, location) = var
var_func, var_lineno = location
var_code_name = var_func.__name__
if var_code_name != code_name:
continue
all_deps = ""
for (source, arrow) in [(self.data, "<="), (self.control, "<-")]:
deps = ""
for data_dep in source[var]:
if deps == "":
deps = f" {arrow} "
else:
deps += ", "
deps += self.format_var(data_dep, func)
if deps != "":
if all_deps != "":
all_deps += ";"
all_deps += deps
if all_deps == "":
continue
out += (" " +
self.format_var(var, func) +
all_deps + "\n")
return out
###Output
_____no_output_____
###Markdown
Here is a compact string representation of dependencies. We see how the (last) `middle() return value` has a data dependency on `y` in Line 1, and a control dependency on the `<test>` in Line 5.
###Code
print(middle_deps())
###Output
middle():
<test> (2) <= z (1), y (1)
<test> (3) <= x (1), y (1); <- <test> (2)
<test> (5) <= x (1), z (1); <- <test> (3)
<middle() return value> (6) <= y (1); <- <test> (5)
###Markdown
The `__repr__()` method shows a raw form of dependencies, useful for creating dependencies from scratch.
###Code
class Dependencies(Dependencies):
def repr_var(self, var: Node) -> str:
name, location = var
func, lineno = location
return f"({repr(name)}, ({func.__name__}, {lineno}))"
def repr_deps(self, var_set: Set[Node]) -> str:
if len(var_set) == 0:
return "set()"
return ("{" +
", ".join(f"{self.repr_var(var)}"
for var in var_set) +
"}")
def repr_dependencies(self, vars: Dependency) -> str:
return ("{\n " +
",\n ".join(
f"{self.repr_var(var)}: {self.repr_deps(vars[var])}"
for var in vars) +
"}")
def __repr__(self) -> str:
"""Represent dependencies as a Python expression"""
# Useful for saving and restoring values
return (f"Dependencies(\n" +
f" data={self.repr_dependencies(self.data)},\n" +
f" control={self.repr_dependencies(self.control)})")
print(repr(middle_deps()))
###Output
Dependencies(
data={
('z', (middle, 1)): set(),
('y', (middle, 1)): set(),
('x', (middle, 1)): set(),
('<test>', (middle, 2)): {('z', (middle, 1)), ('y', (middle, 1))},
('<test>', (middle, 3)): {('x', (middle, 1)), ('y', (middle, 1))},
('<test>', (middle, 5)): {('x', (middle, 1)), ('z', (middle, 1))},
('<middle() return value>', (middle, 6)): {('y', (middle, 1))}},
control={
('z', (middle, 1)): set(),
('y', (middle, 1)): set(),
('x', (middle, 1)): set(),
('<test>', (middle, 2)): set(),
('<test>', (middle, 3)): {('<test>', (middle, 2))},
('<test>', (middle, 5)): {('<test>', (middle, 3))},
('<middle() return value>', (middle, 6)): {('<test>', (middle, 5))}})
###Markdown
An even more useful representation comes when integrating these dependencies as comments into the code. The method `code(item_1, item_2, ...)` lists the given (function) items, including their dependencies; `code()` lists _all_ functions contained in the dependencies.
###Code
from typing import cast
class Dependencies(Dependencies):
def code(self, *items: Callable, mode: str = 'cd') -> None:
"""
List `items` on standard output, including dependencies as comments.
If `items` is empty, all included functions are listed.
`mode` can contain 'c' (draw control dependencies) and 'd' (draw data dependencies)
(default: 'cd').
"""
if len(items) == 0:
items = cast(Tuple[Callable], self.all_functions().keys())
for i, item in enumerate(items):
if i > 0:
print()
self._code(item, mode)
def _code(self, item: Callable, mode: str) -> None:
# The functions in dependencies may be (instrumented) copies
# of the original function. Find the function with the same name.
func = item
for fn in self.all_functions():
if fn == item or fn.__name__ == item.__name__:
func = fn
break
all_vars = self.all_vars()
slice_locations = set(location for (name, location) in all_vars)
source_lines, first_lineno = inspect.getsourcelines(func)
n = first_lineno
for line in source_lines:
line_location = (func, n)
if line_location in slice_locations:
prefix = "* "
else:
prefix = " "
print(f"{prefix}{n:4} ", end="")
comment = ""
for (mode_control, source, arrow) in [
('d', self.data, '<='),
('c', self.control, '<-')
]:
if mode_control not in mode:
continue
deps = ""
for var in source:
name, location = var
if location == line_location:
for dep_var in source[var]:
if deps == "":
deps = arrow + " "
else:
deps += ", "
deps += self.format_var(dep_var, item)
if deps != "":
if comment != "":
comment += "; "
comment += deps
if comment != "":
line = line.rstrip() + " # " + comment
print_content(line.rstrip(), '.py')
print()
n += 1
###Output
_____no_output_____
###Markdown
End of Excursion The following listing shows such an integration. For each executed line (`*`), we see its data (`<=`) and control (`<-`) dependencies, listing the associated variables and line numbers. The comment```python# <= y (1); <- <test> (5)```for Line 6, for instance, states that the return value is data dependent on the value of `y` in Line 1, and control dependent on the test in Line 5.Again, one can easily follow these dependencies back to track where a value came from (data dependencies) and why a statement was executed (control dependencies).
###Code
# ignore
middle_deps().code() # type: ignore
###Output
* 1 [34mdef[39;49;00m [32mmiddle[39;49;00m(x, y, z): [37m# type: ignore[39;49;00m
* 2 [34mif[39;49;00m y < z: [37m# <= z (1), y (1)[39;49;00m
* 3 [34mif[39;49;00m x < y: [37m# <= x (1), y (1); <- <test> (2)[39;49;00m
4 [34mreturn[39;49;00m y
* 5 [34melif[39;49;00m x < z: [37m# <= x (1), z (1); <- <test> (3)[39;49;00m
* 6 [34mreturn[39;49;00m y [37m# <= y (1); <- <test> (5)[39;49;00m
7 [34melse[39;49;00m:
8 [34mif[39;49;00m x > y:
9 [34mreturn[39;49;00m y
10 [34melif[39;49;00m x > z:
11 [34mreturn[39;49;00m x
12 [34mreturn[39;49;00m z
###Markdown
One important aspect of dependencies is that they not only point to specific sources and causes of failures – but that they also _rule out_ parts of the program and state as failure causes.* In the above code, Lines 8 and later have no influence on the output, simply because they were not executed.* Furthermore, we see that we can start our investigation with Line 6, because that is the last one executed.* The data dependencies tell us that no statement has interfered with the value of `y` between the function call and its return.* Hence, the error must be in the conditions and the final `return` statement.With this in mind, recall that our original invocation was `middle(2, 1, 3)`. Why and how is the above code wrong?
###Code
quiz("Which of the following `middle()` code lines should be fixed?",
[
"Line 2: `if y < z:`",
"Line 3: `if x < y:`",
"Line 5: `elif x < z:`",
"Line 6: `return z`",
], '(1 ** 0 + 1 ** 1) ** (1 ** 2 + 1 ** 3)')
###Output
_____no_output_____
###Markdown
Indeed, from the controlling conditions, we see that `y < z`, `x >= y`, and `x < z` all hold. Hence, `y <= x < z` holds, and it is `x`, not `y`, that should be returned. SlicesGiven a dependency graph for a particular variable, we can identify the subset of the program that could have influenced it – the so-called _slice_. In the above code listing, these code locations are highlighted with `*` characters. Only these locations are part of the slice. Slices are central to debugging for two reasons:* First, they _rule out_ those locations of the program that could _not_ have an effect on the failure. Hence, these locations need not be investigated when searching for the defect. Nor do they need to be considered for a fix, as any change outside of the program slice by construction cannot affect the failure.* Second, they bring together possible origins that may be scattered across the code. Many dependencies in program code are _non-local_, with references to functions, classes, and modules defined in other locations, files, or libraries. A slice brings together all those locations in a single whole. Here is an example of a slice – this time for our well-known `remove_html_markup()` function from [the introduction to debugging](Intro_Debugging.ipynb):
###Code
from Intro_Debugging import remove_html_markup
print_content(inspect.getsource(remove_html_markup), '.py')
###Output
[34mdef[39;49;00m [32mremove_html_markup[39;49;00m(s): [37m# type: ignore[39;49;00m
tag = [34mFalse[39;49;00m
quote = [34mFalse[39;49;00m
out = [33m"[39;49;00m[33m"[39;49;00m
[34mfor[39;49;00m c [35min[39;49;00m s:
[34massert[39;49;00m tag [35mor[39;49;00m [35mnot[39;49;00m quote
[34mif[39;49;00m c == [33m'[39;49;00m[33m<[39;49;00m[33m'[39;49;00m [35mand[39;49;00m [35mnot[39;49;00m quote:
tag = [34mTrue[39;49;00m
[34melif[39;49;00m c == [33m'[39;49;00m[33m>[39;49;00m[33m'[39;49;00m [35mand[39;49;00m [35mnot[39;49;00m quote:
tag = [34mFalse[39;49;00m
[34melif[39;49;00m (c == [33m'[39;49;00m[33m"[39;49;00m[33m'[39;49;00m [35mor[39;49;00m c == [33m"[39;49;00m[33m'[39;49;00m[33m"[39;49;00m) [35mand[39;49;00m tag:
quote = [35mnot[39;49;00m quote
[34melif[39;49;00m [35mnot[39;49;00m tag:
out = out + c
[34mreturn[39;49;00m out
###Markdown
When we invoke `remove_html_markup()` as follows...
###Code
remove_html_markup('<foo>bar</foo>')
###Output
_____no_output_____
###Markdown
... we obtain the following dependencies:
###Code
# ignore
def remove_html_markup_deps() -> Dependencies:
return Dependencies({('s', (remove_html_markup, 136)): set(), ('tag', (remove_html_markup, 137)): set(), ('quote', (remove_html_markup, 138)): set(), ('out', (remove_html_markup, 139)): set(), ('c', (remove_html_markup, 141)): {('s', (remove_html_markup, 136))}, ('<test>', (remove_html_markup, 144)): {('quote', (remove_html_markup, 138)), ('c', (remove_html_markup, 141))}, ('tag', (remove_html_markup, 145)): set(), ('<test>', (remove_html_markup, 146)): {('quote', (remove_html_markup, 138)), ('c', (remove_html_markup, 141))}, ('<test>', (remove_html_markup, 148)): {('c', (remove_html_markup, 141))}, ('<test>', (remove_html_markup, 150)): {('tag', (remove_html_markup, 147)), ('tag', (remove_html_markup, 145))}, ('tag', (remove_html_markup, 147)): set(), ('out', (remove_html_markup, 151)): {('out', (remove_html_markup, 151)), ('c', (remove_html_markup, 141)), ('out', (remove_html_markup, 139))}, ('<remove_html_markup() return value>', (remove_html_markup, 153)): {('<test>', (remove_html_markup, 146)), ('out', (remove_html_markup, 151))}}, {('s', (remove_html_markup, 136)): set(), ('tag', (remove_html_markup, 137)): set(), ('quote', (remove_html_markup, 138)): set(), ('out', (remove_html_markup, 139)): set(), ('c', (remove_html_markup, 141)): set(), ('<test>', (remove_html_markup, 144)): set(), ('tag', (remove_html_markup, 145)): {('<test>', (remove_html_markup, 144))}, ('<test>', (remove_html_markup, 146)): {('<test>', (remove_html_markup, 144))}, ('<test>', (remove_html_markup, 148)): {('<test>', (remove_html_markup, 146))}, ('<test>', (remove_html_markup, 150)): {('<test>', (remove_html_markup, 148))}, ('tag', (remove_html_markup, 147)): {('<test>', (remove_html_markup, 146))}, ('out', (remove_html_markup, 151)): {('<test>', (remove_html_markup, 150))}, ('<remove_html_markup() return value>', (remove_html_markup, 153)): set()})
# ignore
remove_html_markup_deps().graph()
###Output
_____no_output_____
###Markdown
Again, we can read such a graph _forward_ (starting from, say, `s`) or _backward_ (starting from the return value). Starting forward, we see how the passed string `s` flows into the `for` loop, breaking `s` into individual characters `c` that are then checked on various occasions, before flowing into the `out` return value. We also see how the various `if` conditions are all influenced by `c`, `tag`, and `quote`.
###Code
quiz("Why does the first line `tag = False` not influence anything?",
[
"Because the input contains only tags",
"Because `tag` is set to True with the first character",
"Because `tag` is not read by any variable",
"Because the input contains no tags",
], '(1 << 1 + 1 >> 1)')
###Output
_____no_output_____
###Markdown
Which are the locations that set `tag` to True? To this end, we compute the slice of `tag` at `tag = True`:
###Code
# ignore
tag_deps = Dependencies({('tag', (remove_html_markup, 145)): set(), ('<test>', (remove_html_markup, 144)): {('quote', (remove_html_markup, 138)), ('c', (remove_html_markup, 141))}, ('quote', (remove_html_markup, 138)): set(), ('c', (remove_html_markup, 141)): {('s', (remove_html_markup, 136))}, ('s', (remove_html_markup, 136)): set()}, {('tag', (remove_html_markup, 145)): {('<test>', (remove_html_markup, 144))}, ('<test>', (remove_html_markup, 144)): set(), ('quote', (remove_html_markup, 138)): set(), ('c', (remove_html_markup, 141)): set(), ('s', (remove_html_markup, 136)): set()})
tag_deps
###Output
_____no_output_____
###Markdown
We see where the value of `tag` comes from: from the characters `c` in `s` as well as `quote`, which all cause it to be set. Again, we can combine these dependencies and the listing in a single, compact view. Note, again, that there are no other locations in the code that could possibly have affected `tag` in our run.
###Code
# ignore
tag_deps.code()
quiz("How does the slice of `tag = True` change "
"for a different value of `s`?",
[
"Not at all",
"If `s` contains a quote, the `quote` slice is included, too",
"If `s` contains no HTML tag, the slice will be empty"
], '[1, 2, 3][1:]')
###Output
_____no_output_____
###Markdown
Indeed, our dynamic slices reflect dependencies as they occurred within a single execution. As the execution changes, so do the dependencies. Tracking TechniquesFor the remainder of this chapter, let us investigate means to _determine such dependencies_ automatically – by _collecting_ them during program execution. The idea is that with a single Python call, we can collect the dependencies for some computation, and present them to programmers – as graphs or as code annotations, as shown above. To track dependencies, for every variable, we need to keep track of its _origins_ – where it obtained its value, and which tests controlled its assignments. There are two ways to do so:* Wrapping Data Objects* Wrapping Data Accesses Wrapping Data Objects One way to track origins is to _wrap_ each value in a class that stores both a value and the origin of the value. If a variable `x` is initialized to zero in Line 3, for instance, we could store it as```x = (value=0, origin=)```and if it is copied in, say, Line 5 to another variable `y`, we could store this as```y = (value=0, origin=)```Such a scheme would allow us to track origins and dependencies right within the variable. In a language like Python, it is actually possibly to subclass from basic types. Here's how we create a `MyInt` subclass of `int`:
###Code
class MyInt(int):
def __new__(cls: Type, value: Any, *args: Any, **kwargs: Any) -> Any:
return super(cls, cls).__new__(cls, value)
def __repr__(self) -> str:
return f"{int(self)}"
n: MyInt = MyInt(5)
###Output
_____no_output_____
###Markdown
We can access `n` just like any integer:
###Code
n, n + 1
###Output
_____no_output_____
###Markdown
However, we can also add extra attributes to it:
###Code
n.origin = "Line 5" # type: ignore
n.origin # type: ignore
###Output
_____no_output_____
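###Markdown
As the discussion below points out, such origins get lost in computations like `n + 1`. A minimal sketch of the operator overloading that preserving origins would require (an illustration only, not part of this chapter's implementation):
###Code
class TrackedInt(int):
    """Sketch: an `int` that propagates an `origin` attribute through `+`"""
    def __new__(cls, value: Any, origin: Any = None) -> Any:
        obj = super().__new__(cls, value)
        obj.origin = origin
        return obj

    def __add__(self, other: Any) -> 'TrackedInt':
        # Propagate the origin of the left operand into the result
        return TrackedInt(int(self) + int(other), origin=self.origin)

(TrackedInt(5, origin="Line 5") + 1).origin
###Output
_____no_output_____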
###Markdown
Such a "wrapping" scheme has the advantage of _leaving program code untouched_ – simply pass "wrapped" objects instead of the original values. However, it also has a number of drawbacks.* First, we must make sure that the "wrapper" objects are still compatible with the original values – notably by converting them back whenever needed. (What happens if an internal Python function expects an `int` and gets a `MyInt` instead?)* Second, we have to make sure that origins do not get lost during computations – which involves overloading operators such as `+`, `-`, `*`, and so on. (Right now, `MyInt(1) + 1` gives us an `int` object, not a `MyInt`.)* Third, we have to do this for _all_ data types of a language, which is pretty tedious.* Fourth and last, we want to track whenever a value is assigned to another variable. Python has no support for this, and thus our dependencies will necessarily be incomplete. Wrapping Data Accesses An alternate way of tracking origins is to _instrument_ the source code such that all _data read and write operations are tracked_. That is, the original data stays unchanged, but we change the code instead.In essence, for every occurrence of a variable `x` being _read_, we replace it with```python_data.get('x', x)  # returns x```and for every occurrence of a value being _written_ to `x`, we replace the value with```python_data.set('x', value)  # returns value```and let the `_data` object track these reads and writes.Hence, an assignment such as ```pythona = b + c```would get rewritten to```pythona = _data.set('a', _data.get('b', b) + _data.get('c', c))```and with every access to `_data`, we would track 1. the current _location_ in the code, and 2. whether the respective variable was read or written.For the above statement, we could deduce that `b` and `c` were read, and `a` was written – which makes `a` data dependent on `b` and `c`. The advantage of such instrumentation is that it works with _arbitrary objects_ (in Python, that is) – we do not care whether `a`, `b`, and `c` are integers, floats, strings, lists, or any other type for which `+` would be defined. Also, the code semantics remain entirely unchanged.The disadvantage, however, is that it takes a bit of effort to exactly separate reads and writes into individual groups, and that a number of language features have to be handled separately. This is what we do in the remainder of this chapter. A Data TrackerTo implement `_data` accesses as shown above, we introduce the `DataTracker` class. As its name suggests, it keeps track of variables being read and written, and provides methods to determine the code location where this took place.
###Code
class DataTracker(StackInspector):
"""Track data accesses during execution"""
def __init__(self, log: bool = False) -> None:
"""Constructor. If `log` is set, turn on logging."""
self.log = log
###Output
_____no_output_____
###Markdown
`set()` is invoked when a variable is set, as in```pythonpi = _data.set('pi', 3.1415)```By default, we simply log the access using name and value. (`loads` will be used later.)
###Code
class DataTracker(DataTracker):
def set(self, name: str, value: Any, loads: Optional[Set[str]] = None) -> Any:
"""Track setting `name` to `value`."""
if self.log:
caller_func, lineno = self.caller_location()
print(f"{caller_func.__name__}:{lineno}: setting {name}")
return value
###Output
_____no_output_____
###Markdown
`get()` is invoked when a variable is retrieved, as in```pythonprint(_data.get('pi', pi))```By default, we simply log the access.
###Code
class DataTracker(DataTracker):
def get(self, name: str, value: Any) -> Any:
"""Track getting `value` from `name`."""
if self.log:
caller_func, lineno = self.caller_location()
print(f"{caller_func.__name__}:{lineno}: getting {name}")
return value
###Output
_____no_output_____
###Markdown
Here's an example of a logging `DataTracker`:
###Code
_test_data = DataTracker(log=True)
x = _test_data.set('x', 1)
_test_data.get('x', x)
###Output
<module>:1: getting x
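###Markdown
Before automating this, we can apply the rewriting scheme from above by hand to the assignment `a = b + c` (a sketch; the log shows the two reads and the write):
###Code
b, c = 3, 4
a = _test_data.set('a', _test_data.get('b', b) + _test_data.get('c', c))
a
###Output
_____no_output_____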
###Markdown
Instrumenting Source CodeHow do we transform source code such that read and write accesses to variables are automatically rewritten? To this end, we inspect the internal representation of source code, namely the _abstract syntax trees_ (ASTs). An AST represents the code as a tree, with specific node types for each syntactical element.
###Code
import ast
import astor
from bookutils import show_ast
###Output
_____no_output_____
###Markdown
Here is the tree representation for our `middle()` function. It starts with a `FunctionDef` node at the top (with the name `"middle"` and the three arguments `x`, `y`, `z` as children), followed by a subtree for each of the `If` statements, each of which contains a branch for when its condition evaluates to `True` and a branch for when its condition evaluates to `False`.
###Code
middle_tree = ast.parse(inspect.getsource(middle))
show_ast(middle_tree)
###Output
_____no_output_____
###Markdown
At the very bottom of the tree, you can see a number of `Name` nodes, referring to individual variables. These are the ones we want to transform. Tracking Variable AccessOur goal is to _traverse_ the tree, identify all `Name` nodes, and convert them to respective `_data` accesses.To this end, we manipulate the AST through the Python modules `ast` and `astor`. The [official Python `ast` reference](http://docs.python.org/3/library/ast) is complete, but a bit brief; the documentation ["Green Tree Snakes - the missing Python AST docs"](https://greentreesnakes.readthedocs.io/en/latest/) provides an excellent introduction. The Python `ast` module provides a class `NodeTransformer` that allows such transformations. Subclassing from it, we provide a method `visit_Name()` that will be invoked for all `Name` nodes – and replaces each of them by a new subtree from `make_get_data()`:
###Code
from ast import NodeTransformer, NodeVisitor, Name, AST
DATA_TRACKER = '_data'
class TrackGetTransformer(NodeTransformer):
def visit_Name(self, node: Name) -> AST:
self.generic_visit(node)
if node.id in dir(__builtins__):
# Do not change built-in names
return node
if node.id == DATA_TRACKER:
# Do not change own accesses
return node
if not isinstance(node.ctx, Load):
# Only change loads (not stores, not deletions)
return node
new_node = make_get_data(node.id)
ast.copy_location(new_node, node)
return new_node
###Output
_____no_output_____
###Markdown
Our function `make_get_data(id, method)` returns a new subtree equivalent to the Python code `_data.method('id', id)`.
###Code
from ast import Module, Load, Store, \
Attribute, With, withitem, keyword, Call, Expr, Assign, AugAssign
# Starting with Python 3.8, these will become Constant.
# from ast import Num, Str, NameConstant
# Use `ast.Num`, `ast.Str`, and `ast.NameConstant` for compatibility
def make_get_data(id: str, method: str = 'get') -> Call:
return Call(func=Attribute(value=Name(id=DATA_TRACKER, ctx=Load()),
attr=method, ctx=Load()),
args=[ast.Str(s=id), Name(id=id, ctx=Load())],
keywords=[])
###Output
_____no_output_____
###Markdown
This is the tree that `make_get_data()` produces:
###Code
show_ast(Module(body=[make_get_data("x")]))
###Output
_____no_output_____
###Markdown
How do we know that this is a correct subtree? We can carefully read the [official Python `ast` reference](http://docs.python.org/3/library/ast) and then proceed by trial and error (and apply [delta debugging](DeltaDebugger.ipynb) to determine error causes). Or – pro tip! – we can simply take a piece of Python code, parse it and use `ast.dump()` to print out how to construct the resulting AST:
###Code
print(ast.dump(ast.parse("_data.get('x', x)")))
###Output
Module(body=[Expr(value=Call(func=Attribute(value=Name(id='_data', ctx=Load()), attr='get', ctx=Load()), args=[Str(s='x'), Name(id='x', ctx=Load())], keywords=[]))])
###Markdown
If you compare the above output with the code of `make_get_data()`, above, you can see how the body of `make_get_data()` was derived. Let us put `TrackGetTransformer` into action. Its `visit()` method calls `visit_Name()`, which in turn transforms the `Name` nodes as we want. This happens in place.
###Code
TrackGetTransformer().visit(middle_tree);
###Output
_____no_output_____
###Markdown
To see the effect of our transformations, we introduce a method `dump_tree()` which outputs the tree – and also compiles it to check for any inconsistencies.
###Code
def dump_tree(tree: AST) -> None:
print_content(astor.to_source(tree), '.py')
ast.fix_missing_locations(tree) # Must run this before compiling
_ = compile(tree, '<dump_tree>', 'exec')
###Output
_____no_output_____
###Markdown
We see that our transformer has properly replaced all variable reads with `_data.get()` calls:
###Code
dump_tree(middle_tree)
###Output
[34mdef[39;49;00m [32mmiddle[39;49;00m(x, y, z):
[34mif[39;49;00m _data.get([33m'[39;49;00m[33my[39;49;00m[33m'[39;49;00m, y) < _data.get([33m'[39;49;00m[33mz[39;49;00m[33m'[39;49;00m, z):
[34mif[39;49;00m _data.get([33m'[39;49;00m[33mx[39;49;00m[33m'[39;49;00m, x) < _data.get([33m'[39;49;00m[33my[39;49;00m[33m'[39;49;00m, y):
[34mreturn[39;49;00m _data.get([33m'[39;49;00m[33my[39;49;00m[33m'[39;49;00m, y)
[34melif[39;49;00m _data.get([33m'[39;49;00m[33mx[39;49;00m[33m'[39;49;00m, x) < _data.get([33m'[39;49;00m[33mz[39;49;00m[33m'[39;49;00m, z):
[34mreturn[39;49;00m _data.get([33m'[39;49;00m[33my[39;49;00m[33m'[39;49;00m, y)
[34melif[39;49;00m _data.get([33m'[39;49;00m[33mx[39;49;00m[33m'[39;49;00m, x) > _data.get([33m'[39;49;00m[33my[39;49;00m[33m'[39;49;00m, y):
[34mreturn[39;49;00m _data.get([33m'[39;49;00m[33my[39;49;00m[33m'[39;49;00m, y)
[34melif[39;49;00m _data.get([33m'[39;49;00m[33mx[39;49;00m[33m'[39;49;00m, x) > _data.get([33m'[39;49;00m[33mz[39;49;00m[33m'[39;49;00m, z):
[34mreturn[39;49;00m _data.get([33m'[39;49;00m[33mx[39;49;00m[33m'[39;49;00m, x)
[34mreturn[39;49;00m _data.get([33m'[39;49;00m[33mz[39;49;00m[33m'[39;49;00m, z)
###Markdown
Let us now execute this code together with the `DataTracker()` class we previously introduced. The class `DataTrackerTester()` takes a (transformed) tree and a function. Using it as```pythonwith DataTrackerTester(tree, func): func(...)```first executes the code in _tree_ (possibly instrumenting `func`) and then the `with` body. At the end, `func` is restored to its previous (non-instrumented) version.
###Code
from types import TracebackType
class DataTrackerTester:
def __init__(self, tree: AST, func: Callable, log: bool = True) -> None:
"""Constructor. Execute the code in `tree` while instrumenting `func`."""
# We pass the source file of `func` such that we can retrieve it
# when accessing the location of the new compiled code
source = cast(str, inspect.getsourcefile(func))
self.code = compile(tree, source, 'exec')
self.func = func
self.log = log
def make_data_tracker(self) -> Any:
return DataTracker(log=self.log)
def __enter__(self) -> Any:
"""Rewrite function"""
tracker = self.make_data_tracker()
globals()[DATA_TRACKER] = tracker
exec(self.code, globals())
return tracker
def __exit__(self, exc_type: Type, exc_value: BaseException,
traceback: TracebackType) -> Optional[bool]:
"""Restore function"""
globals()[self.func.__name__] = self.func
del globals()[DATA_TRACKER]
return None
###Output
_____no_output_____
###Markdown
Here is our `middle()` function:
###Code
print_content(inspect.getsource(middle), '.py', start_line_number=1)
###Output
1 [34mdef[39;49;00m [32mmiddle[39;49;00m(x, y, z): [37m# type: ignore[39;49;00m
2 [34mif[39;49;00m y < z:
3 [34mif[39;49;00m x < y:
4 [34mreturn[39;49;00m y
5 [34melif[39;49;00m x < z:
6 [34mreturn[39;49;00m y
7 [34melse[39;49;00m:
8 [34mif[39;49;00m x > y:
9 [34mreturn[39;49;00m y
10 [34melif[39;49;00m x > z:
11 [34mreturn[39;49;00m x
12 [34mreturn[39;49;00m z
###Markdown
And here is our instrumented `middle_tree` executed with a `DataTracker` object. We see how the `middle()` tests access one argument after another.
###Code
with DataTrackerTester(middle_tree, middle):
middle(2, 1, 3)
###Output
middle:2: getting y
middle:2: getting z
middle:3: getting x
middle:3: getting y
middle:5: getting x
middle:5: getting z
middle:6: getting y
###Markdown
After `DataTrackerTester` is done, `middle` is reverted to its non-instrumented version:
###Code
middle(2, 1, 3)
###Output
_____no_output_____
###Markdown
For a complete picture of what happens during executions, we implement a number of additional code transformers. For each assignment statement `x = y`, we change it to `x = _data.set('x', y)`. This allows us to __track assignments__. Excursion: Tracking Assignments For the remaining transformers, we follow the same steps as for `TrackGetTransformer`, except that our `visit_...()` methods focus on different nodes, and return different subtrees. Here, we focus on assignment nodes. We want to transform assignments `x = value` into `_data.set('x', value)` to track assignments to `x`. If the left hand side of the assignment is more complex, as in `x[y] = value`, we want to ensure the read access to `x` and `y` is also tracked. By transforming `x[y] = value` into `_data.set('x', value, loads=(x, y))`, we ensure that `x` and `y` are marked as read (as the otherwise ignored `loads` argument would be changed to `_data.get()` calls for `x` and `y`). Using `ast.dump()`, we reveal what the corresponding syntax tree has to look like:
###Code
print(ast.dump(ast.parse("_data.set('x', value, loads=(a, b))")))
###Output
Module(body=[Expr(value=Call(func=Attribute(value=Name(id='_data', ctx=Load()), attr='set', ctx=Load()), args=[Str(s='x'), Name(id='value', ctx=Load())], keywords=[keyword(arg='loads', value=Tuple(elts=[Name(id='a', ctx=Load()), Name(id='b', ctx=Load())], ctx=Load()))]))])
###Markdown
Using this structure, we can write a function `make_set_data()` which constructs such a subtree.
###Code
def make_set_data(id: str, value: Any,
loads: Optional[Set[str]] = None, method: str = 'set') -> Call:
"""
Construct a subtree _data.`method`('`id`', `value`).
If `loads` is set to [X1, X2, ...], make it
_data.`method`('`id`', `value`, loads=(X1, X2, ...))
"""
keywords=[]
if loads:
keywords = [
keyword(arg='loads',
value=ast.Tuple(
elts=[Name(id=load, ctx=Load()) for load in loads],
ctx=Load()
))
]
new_node = Call(func=Attribute(value=Name(id=DATA_TRACKER, ctx=Load()),
attr=method, ctx=Load()),
args=[ast.Str(s=id), value],
keywords=keywords)
ast.copy_location(new_node, value)
return new_node
###Output
_____no_output_____
###Markdown
The problem is, however: How do we get the name of the variable being assigned to? The left-hand side of an assignment can be a complex expression such as `x[i]`. We use the leftmost name of the left-hand side as the name being assigned to.
###Code
class LeftmostNameVisitor(NodeVisitor):
def __init__(self) -> None:
super().__init__()
self.leftmost_name: Optional[str] = None
def visit_Name(self, node: Name) -> None:
if self.leftmost_name is None:
self.leftmost_name = node.id
self.generic_visit(node)
def leftmost_name(tree: AST) -> Optional[str]:
visitor = LeftmostNameVisitor()
visitor.visit(tree)
return visitor.leftmost_name
leftmost_name(ast.parse('a[x] = 25'))
###Output
_____no_output_____
###Markdown
Python also allows _tuple assignments_, as in `(a, b, c) = (1, 2, 3)`. We extract all variables being stored (that is, expressions whose `ctx` attribute is `Store()`) and extract their (leftmost) names.
###Code
class StoreVisitor(NodeVisitor):
def __init__(self) -> None:
super().__init__()
self.names: Set[str] = set()
def visit(self, node: AST) -> None:
if hasattr(node, 'ctx') and isinstance(node.ctx, Store): # type: ignore
name = leftmost_name(node)
if name:
self.names.add(name)
self.generic_visit(node)
def store_names(tree: AST) -> Set[str]:
visitor = StoreVisitor()
visitor.visit(tree)
return visitor.names
store_names(ast.parse('a[x], b[y], c = 1, 2, 3'))
###Output
_____no_output_____
###Markdown
For complex assignments, we also want to access the names read in the left hand side of an expression.
###Code
class LoadVisitor(NodeVisitor):
def __init__(self) -> None:
super().__init__()
self.names: Set[str] = set()
def visit(self, node: AST) -> None:
if hasattr(node, 'ctx') and isinstance(node.ctx, Load): # type: ignore
name = leftmost_name(node)
if name is not None:
self.names.add(name)
self.generic_visit(node)
def load_names(tree: AST) -> Set[str]:
visitor = LoadVisitor()
visitor.visit(tree)
return visitor.names
load_names(ast.parse('a[x], b[y], c = 1, 2, 3'))
###Output
_____no_output_____
###Markdown
With this, we can now define `TrackSetTransformer` as a transformer for regular assignments. Note that in Python, an assignment can have multiple targets, as in `a = b = c`; we assign the data dependencies of `c` to them all.
###Code
class TrackSetTransformer(NodeTransformer):
def visit_Assign(self, node: Assign) -> Assign:
value = astor.to_source(node.value)
if value.startswith(DATA_TRACKER + '.set'):
return node # Do not apply twice
for target in node.targets:
loads = load_names(target)
for store_name in store_names(target):
node.value = make_set_data(store_name, node.value,
loads=loads)
loads = set()
return node
###Output
_____no_output_____
###Markdown
Augmented assignments need extra treatment. We change statements of the form `x += y` to `x += _data.augment('x', y)`.
###Code
class TrackSetTransformer(TrackSetTransformer):
def visit_AugAssign(self, node: AugAssign) -> AugAssign:
value = astor.to_source(node.value)
if value.startswith(DATA_TRACKER):
return node # Do not apply twice
id = cast(str, leftmost_name(node.target))
node.value = make_set_data(id, node.value, method='augment')
return node
###Output
_____no_output_____
###Markdown
The corresponding `augment()` method uses a combination of `set()` and `get()` to reflect the semantics.
###Code
class DataTracker(DataTracker):
def augment(self, name: str, value: Any) -> Any:
"""Track augmenting `name` with `value`.
To be overloaded in subclasses."""
self.set(name, self.get(name, value))
return value
###Output
_____no_output_____
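###Markdown
To see the combined get/set semantics of `augment()`, here is a quick illustrative check with a logging tracker:
###Code
_test_augment = DataTracker(log=True)
x = 3
x += _test_augment.augment('x', 2)  # logs both a get and a set of 'x'
x  # 5 -- augment() returns its `value` argument unchanged
###Output
_____no_output_____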
###Markdown
Here's both of these transformers in action. Our original function has a number of assignments:
###Code
def assign_test(x): # type: ignore
fourty_two = forty_two = 42
a, b, c = 1, 2, 3
c[d[x]].attr = 47
foo *= bar + 1
assign_tree = ast.parse(inspect.getsource(assign_test))
TrackSetTransformer().visit(assign_tree)
dump_tree(assign_tree)
###Output
[34mdef[39;49;00m [32massign_test[39;49;00m(x):
fourty_two = forty_two = _data.set([33m'[39;49;00m[33mforty_two[39;49;00m[33m'[39;49;00m, _data.set([33m'[39;49;00m[33mfourty_two[39;49;00m[33m'[39;49;00m, [34m42[39;49;00m)
)
a, b, c = _data.set([33m'[39;49;00m[33ma[39;49;00m[33m'[39;49;00m, _data.set([33m'[39;49;00m[33mc[39;49;00m[33m'[39;49;00m, _data.set([33m'[39;49;00m[33mb[39;49;00m[33m'[39;49;00m, ([34m1[39;49;00m, [34m2[39;49;00m, [34m3[39;49;00m))))
c[d[x]].attr = _data.set([33m'[39;49;00m[33mc[39;49;00m[33m'[39;49;00m, [34m47[39;49;00m, loads=(d, x, c))
foo *= _data.augment([33m'[39;49;00m[33mfoo[39;49;00m[33m'[39;49;00m, bar + [34m1[39;49;00m)
###Markdown
If we later apply our transformer for data accesses, we can see that we track all variable reads and writes.
###Code
TrackGetTransformer().visit(assign_tree)
dump_tree(assign_tree)
###Output
[34mdef[39;49;00m [32massign_test[39;49;00m(x):
fourty_two = forty_two = _data.set([33m'[39;49;00m[33mforty_two[39;49;00m[33m'[39;49;00m, _data.set([33m'[39;49;00m[33mfourty_two[39;49;00m[33m'[39;49;00m, [34m42[39;49;00m)
)
a, b, c = _data.set([33m'[39;49;00m[33ma[39;49;00m[33m'[39;49;00m, _data.set([33m'[39;49;00m[33mc[39;49;00m[33m'[39;49;00m, _data.set([33m'[39;49;00m[33mb[39;49;00m[33m'[39;49;00m, ([34m1[39;49;00m, [34m2[39;49;00m, [34m3[39;49;00m))))
_data.get([33m'[39;49;00m[33mc[39;49;00m[33m'[39;49;00m, c)[_data.get([33m'[39;49;00m[33md[39;49;00m[33m'[39;49;00m, d)[_data.get([33m'[39;49;00m[33mx[39;49;00m[33m'[39;49;00m, x)]].attr = _data.set(
[33m'[39;49;00m[33mc[39;49;00m[33m'[39;49;00m, [34m47[39;49;00m, loads=(_data.get([33m'[39;49;00m[33md[39;49;00m[33m'[39;49;00m, d), _data.get([33m'[39;49;00m[33mx[39;49;00m[33m'[39;49;00m, x), _data.get([33m'[39;49;00m[33mc[39;49;00m[33m'[39;49;00m,
c)))
foo *= _data.augment([33m'[39;49;00m[33mfoo[39;49;00m[33m'[39;49;00m, _data.get([33m'[39;49;00m[33mbar[39;49;00m[33m'[39;49;00m, bar) + [34m1[39;49;00m)
###Markdown
End of Excursion Each return statement `return x` is transformed to `return _data.set('<func() return value>', x)`, where `func` is the name of the enclosing function. This allows us to __track return values__. Excursion: Tracking Return Values Our `TrackReturnTransformer` also makes use of `make_set_data()`.
###Code
class TrackReturnTransformer(NodeTransformer):
def __init__(self) -> None:
self.function_name: Optional[str] = None
super().__init__()
def visit_FunctionDef(self, node: Union[ast.FunctionDef, ast.AsyncFunctionDef]) -> AST:
outer_name = self.function_name
self.function_name = node.name # Save current name
self.generic_visit(node)
self.function_name = outer_name
return node
def visit_AsyncFunctionDef(self, node: ast.AsyncFunctionDef) -> AST:
return self.visit_FunctionDef(node)
def return_value(self, tp: str = "return") -> str:
if self.function_name is None:
return f"<{tp} value>"
else:
return f"<{self.function_name}() {tp} value>"
def visit_return_or_yield(self, node: Union[ast.Return, ast.Yield, ast.YieldFrom],
tp: str = "return") -> AST:
if node.value is not None:
value = astor.to_source(node.value)
if not value.startswith(DATA_TRACKER + '.set'):
node.value = make_set_data(self.return_value(tp), node.value)
return node
def visit_Return(self, node: ast.Return) -> AST:
return self.visit_return_or_yield(node, tp="return")
def visit_Yield(self, node: ast.Yield) -> AST:
return self.visit_return_or_yield(node, tp="yield")
def visit_YieldFrom(self, node: ast.YieldFrom) -> AST:
return self.visit_return_or_yield(node, tp="yield")
###Output
_____no_output_____
###Markdown
This is the effect of `TrackReturnTransformer`. We see that all return values are saved, and thus all locations of the corresponding return statements are tracked.
###Code
TrackReturnTransformer().visit(middle_tree)
dump_tree(middle_tree)
with DataTrackerTester(middle_tree, middle):
middle(2, 1, 3)
###Output
middle:2: getting y
middle:2: getting z
middle:3: getting x
middle:3: getting y
middle:5: getting x
middle:5: getting z
middle:6: getting y
middle:6: setting <middle() return value>
###Markdown
End of Excursion To track __control dependencies__, for every block controlled by an `if`, `while`, or `for`:1. We wrap their tests in a `_data.test()` wrapper. This allows us to assign pseudo-variables like `<test>` which hold the conditions.2. We wrap their controlled blocks in a `with` statement. This allows us to track the variables read right before the `with` (= the controlling variables), and to restore the current controlling variables when the block is left.A statement```pythonif cond: body```thus becomes```pythonif _data.test(cond): with _data: body``` Excursion: Tracking Control To modify control statements, we traverse the tree, looking for `If` nodes:
###Code
class TrackControlTransformer(NodeTransformer):
def visit_If(self, node: ast.If) -> ast.If:
self.generic_visit(node)
node.test = self.make_test(node.test)
node.body = self.make_with(node.body)
node.orelse = self.make_with(node.orelse)
return node
###Output
_____no_output_____
###Markdown
The subtrees come from helper functions `make_with()` and `make_test()`. Again, all these subtrees are obtained via `ast.dump()`.
###Code
class TrackControlTransformer(TrackControlTransformer):
def make_with(self, block: List[ast.stmt]) -> List[ast.stmt]:
"""Create a subtree 'with _data: `block`'"""
if len(block) == 0:
return []
block_as_text = astor.to_source(block[0])
if block_as_text.startswith('with ' + DATA_TRACKER):
return block # Do not apply twice
new_node = With(
items=[
withitem(
context_expr=Name(id=DATA_TRACKER, ctx=Load()),
optional_vars=None)
],
body=block
)
ast.copy_location(new_node, block[0])
return [new_node]
class TrackControlTransformer(TrackControlTransformer):
def make_test(self, test: ast.expr) -> ast.expr:
test_as_text = astor.to_source(test)
if test_as_text.startswith(DATA_TRACKER + '.test'):
return test # Do not apply twice
new_test = Call(func=Attribute(value=Name(id=DATA_TRACKER, ctx=Load()),
attr='test',
ctx=Load()),
args=[test],
keywords=[])
ast.copy_location(new_test, test)
return new_test
###Output
_____no_output_____
###Markdown
`while` loops are handled just like `if` constructs.
###Code
class TrackControlTransformer(TrackControlTransformer):
def visit_While(self, node: ast.While) -> ast.While:
self.generic_visit(node)
node.test = self.make_test(node.test)
node.body = self.make_with(node.body)
node.orelse = self.make_with(node.orelse)
return node
###Output
_____no_output_____
###Markdown
`for` loops get a different treatment, as there is no condition that would control the body. Still, we ensure that setting the iterator variable is properly tracked.
###Code
class TrackControlTransformer(TrackControlTransformer):
# regular `for` loop
def visit_For(self, node: Union[ast.For, ast.AsyncFor]) -> AST:
self.generic_visit(node)
id = astor.to_source(node.target).strip()
node.iter = make_set_data(id, node.iter)
# Uncomment if you want iterators to control their bodies
# node.body = self.make_with(node.body)
# node.orelse = self.make_with(node.orelse)
return node
# `for` loops in async functions
def visit_AsyncFor(self, node: ast.AsyncFor) -> AST:
return self.visit_For(node)
# `for` clause in comprehensions
def visit_comprehension(self, node: ast.comprehension) -> AST:
self.generic_visit(node)
id = astor.to_source(node.target).strip()
node.iter = make_set_data(id, node.iter)
return node
###Output
_____no_output_____
###Markdown
Here is the effect of `TrackControlTransformer`:
###Code
TrackControlTransformer().visit(middle_tree)
dump_tree(middle_tree)
###Output
[34mdef[39;49;00m [32mmiddle[39;49;00m(x, y, z):
[34mif[39;49;00m _data.test(_data.get([33m'[39;49;00m[33my[39;49;00m[33m'[39;49;00m, y) < _data.get([33m'[39;49;00m[33mz[39;49;00m[33m'[39;49;00m, z)):
[34mwith[39;49;00m _data:
[34mif[39;49;00m _data.test(_data.get([33m'[39;49;00m[33mx[39;49;00m[33m'[39;49;00m, x) < _data.get([33m'[39;49;00m[33my[39;49;00m[33m'[39;49;00m, y)):
[34mwith[39;49;00m _data:
[34mreturn[39;49;00m _data.set([33m'[39;49;00m[33m<middle() return value>[39;49;00m[33m'[39;49;00m, _data.get(
[33m'[39;49;00m[33my[39;49;00m[33m'[39;49;00m, y))
[34melse[39;49;00m:
[34mwith[39;49;00m _data:
[34mif[39;49;00m _data.test(_data.get([33m'[39;49;00m[33mx[39;49;00m[33m'[39;49;00m, x) < _data.get([33m'[39;49;00m[33mz[39;49;00m[33m'[39;49;00m, z)):
[34mwith[39;49;00m _data:
[34mreturn[39;49;00m _data.set([33m'[39;49;00m[33m<middle() return value>[39;49;00m[33m'[39;49;00m,
_data.get([33m'[39;49;00m[33my[39;49;00m[33m'[39;49;00m, y))
[34melse[39;49;00m:
[34mwith[39;49;00m _data:
[34mif[39;49;00m _data.test(_data.get([33m'[39;49;00m[33mx[39;49;00m[33m'[39;49;00m, x) > _data.get([33m'[39;49;00m[33my[39;49;00m[33m'[39;49;00m, y)):
[34mwith[39;49;00m _data:
[34mreturn[39;49;00m _data.set([33m'[39;49;00m[33m<middle() return value>[39;49;00m[33m'[39;49;00m, _data.get(
[33m'[39;49;00m[33my[39;49;00m[33m'[39;49;00m, y))
[34melse[39;49;00m:
[34mwith[39;49;00m _data:
[34mif[39;49;00m _data.test(_data.get([33m'[39;49;00m[33mx[39;49;00m[33m'[39;49;00m, x) > _data.get([33m'[39;49;00m[33mz[39;49;00m[33m'[39;49;00m, z)):
[34mwith[39;49;00m _data:
[34mreturn[39;49;00m _data.set([33m'[39;49;00m[33m<middle() return value>[39;49;00m[33m'[39;49;00m,
_data.get([33m'[39;49;00m[33mx[39;49;00m[33m'[39;49;00m, x))
[34mreturn[39;49;00m _data.set([33m'[39;49;00m[33m<middle() return value>[39;49;00m[33m'[39;49;00m, _data.get([33m'[39;49;00m[33mz[39;49;00m[33m'[39;49;00m, z))
###Markdown
We extend `DataTracker` to also log these events:
###Code
class DataTracker(DataTracker):
def test(self, cond: AST) -> AST:
"""Test condition `cond`. To be overloaded in subclasses."""
if self.log:
caller_func, lineno = self.caller_location()
print(f"{caller_func.__name__}:{lineno}: testing condition")
return cond
class DataTracker(DataTracker):
def __enter__(self) -> Any:
"""Enter `with` block. To be overloaded in subclasses."""
if self.log:
caller_func, lineno = self.caller_location()
print(f"{caller_func.__name__}:{lineno}: entering block")
return self
def __exit__(self, exc_type: Type, exc_value: BaseException,
traceback: TracebackType) -> Optional[bool]:
"""Exit `with` block. To be overloaded in subclasses."""
if self.log:
caller_func, lineno = self.caller_location()
print(f"{caller_func.__name__}:{lineno}: exiting block")
return None
with DataTrackerTester(middle_tree, middle):
middle(2, 1, 3)
###Output
middle:2: getting y
middle:2: getting z
middle:2: testing condition
middle:3: entering block
middle:3: getting x
middle:3: getting y
middle:3: testing condition
middle:5: entering block
middle:5: getting x
middle:5: getting z
middle:5: testing condition
middle:6: entering block
middle:6: getting y
middle:6: setting <middle() return value>
middle:6: exiting block
middle:6: exiting block
middle:6: exiting block
###Markdown
End of Excursion We also want to be able to __track calls__ across multiple functions. To this end, we wrap each call```pythonfunc(arg1, arg2, ...)```into```python_data.ret(_data.call(func)(_data.arg(arg1), _data.arg(arg2), ...))```where each of these wrappers simply passes through its given argument, but allows us to track the beginning of calls (`call()`), the computation of arguments (`arg()`), and the return of the call (`ret()`), respectively. Excursion: Tracking Calls and Arguments Our `TrackCallTransformer` visits all `Call` nodes, applying the transformations as shown above.
###Code
class TrackCallTransformer(NodeTransformer):
def make_call(self, node: AST, func: str,
pos: Optional[int] = None, kw: Optional[str] = None) -> Call:
"""Return _data.call(`func`)(`node`)"""
keywords = []
# `Num()` and `Str()` are deprecated in favor of `Constant()`
if pos:
keywords.append(keyword(arg='pos', value=ast.Num(pos)))
if kw:
keywords.append(keyword(arg='kw', value=ast.Str(kw)))
return Call(func=Attribute(value=Name(id=DATA_TRACKER,
ctx=Load()),
attr=func,
ctx=Load()),
args=[node],
keywords=keywords)
def visit_Call(self, node: Call) -> Call:
self.generic_visit(node)
call_as_text = astor.to_source(node)
if call_as_text.startswith(DATA_TRACKER + '.ret'):
return node # Already applied
func_as_text = astor.to_source(node)
if func_as_text.startswith(DATA_TRACKER + '.'):
return node # Own function
new_args = []
for n, arg in enumerate(node.args):
new_args.append(self.make_call(arg, 'arg', pos=n + 1))
node.args = cast(List[ast.expr], new_args)
for kw in node.keywords:
id = kw.arg if hasattr(kw, 'arg') else None
kw.value = self.make_call(kw.value, 'arg', kw=id)
node.func = self.make_call(node.func, 'call')
return self.make_call(node, 'ret')
###Output
_____no_output_____
###Markdown
Our example function `middle()` does not contain any calls, but here is a function that invokes `middle()` twice:
###Code
def test_call() -> int:
x = middle(1, 2, z=middle(1, 2, 3))
return x
call_tree = ast.parse(inspect.getsource(test_call))
dump_tree(call_tree)
###Output
[34mdef[39;49;00m [32mtest_call[39;49;00m() ->[36mint[39;49;00m:
x = middle([34m1[39;49;00m, [34m2[39;49;00m, z=middle([34m1[39;49;00m, [34m2[39;49;00m, [34m3[39;49;00m))
[34mreturn[39;49;00m x
###Markdown
If we invoke `TrackCallTransformer` on this testing function, we get the following transformed code:
###Code
TrackCallTransformer().visit(call_tree);
dump_tree(call_tree)
def f() -> bool:
return math.isclose(1, 1.0)
f_tree = ast.parse(inspect.getsource(f))
dump_tree(f_tree)
TrackCallTransformer().visit(f_tree);
dump_tree(f_tree)
###Output
[34mdef[39;49;00m [32mf[39;49;00m() ->[36mbool[39;49;00m:
[34mreturn[39;49;00m _data.ret(_data.call(math.isclose)(_data.arg([34m1[39;49;00m, pos=[34m1[39;49;00m), _data.
arg([34m1.0[39;49;00m, pos=[34m2[39;49;00m)))
###Markdown
As before, our default `arg()`, `ret()`, and `call()` methods simply log the event and pass through the given value.
###Code
class DataTracker(DataTracker):
def arg(self, value: Any, pos: Optional[int] = None, kw: Optional[str] = None) -> Any:
"""
Track `value` being passed as argument.
`pos` (if given) is the argument position (starting with 1).
`kw` (if given) is the argument keyword.
"""
if self.log:
caller_func, lineno = self.caller_location()
info = ""
if pos:
info += f" #{pos}"
if kw:
info += f" {repr(kw)}"
print(f"{caller_func.__name__}:{lineno}: pushing arg{info}")
return value
class DataTracker(DataTracker):
def ret(self, value: Any) -> Any:
"""Track `value` being used as return value."""
if self.log:
caller_func, lineno = self.caller_location()
print(f"{caller_func.__name__}:{lineno}: returned from call")
return value
class DataTracker(DataTracker):
def call(self, func: Callable) -> Callable:
"""Track a call to `func`."""
if self.log:
caller_func, lineno = self.caller_location()
print(f"{caller_func.__name__}:{lineno}: calling {func}")
return func
dump_tree(call_tree)
with DataTrackerTester(call_tree, test_call):
test_call()
test_call()
###Output
_____no_output_____
###Markdown
End of Excursion On the receiving end, for each function argument `x`, we insert a call `_data.param('x', x, [position info])` to initialize `x`. This is useful for __tracking parameters across function calls.__ Excursion: Tracking Parameters Again, we use `ast.dump()` to determine the correct syntax tree:
###Code
print(ast.dump(ast.parse("_data.param('x', x, pos=1, last=True)")))
class TrackParamsTransformer(NodeTransformer):
def visit_FunctionDef(self, node: ast.FunctionDef) -> ast.FunctionDef:
self.generic_visit(node)
named_args = []
for child in ast.iter_child_nodes(node.args):
if isinstance(child, ast.arg):
named_args.append(child)
create_stmts = []
for n, child in enumerate(named_args):
keywords=[keyword(arg='pos', value=ast.Num(n=n + 1))]
if child is node.args.vararg:
keywords.append(keyword(arg='vararg', value=ast.Str(s='*')))
if child is node.args.kwarg:
keywords.append(keyword(arg='vararg', value=ast.Str(s='**')))
if n == len(named_args) - 1:
keywords.append(keyword(arg='last',
value=ast.NameConstant(value=True)))
create_stmt = Expr(
value=Call(
func=Attribute(value=Name(id=DATA_TRACKER, ctx=Load()),
attr='param', ctx=Load()),
args=[ast.Str(s=child.arg),
Name(id=child.arg, ctx=Load())
],
keywords=keywords
)
)
ast.copy_location(create_stmt, node)
create_stmts.append(create_stmt)
node.body = cast(List[ast.stmt], create_stmts) + node.body
return node
###Output
_____no_output_____
###Markdown
This is the effect of `TrackParamsTransformer()`. You see how the first three parameters are all initialized.
###Code
TrackParamsTransformer().visit(middle_tree)
dump_tree(middle_tree)
###Output
[34mdef[39;49;00m [32mmiddle[39;49;00m(x, y, z):
_data.param([33m'[39;49;00m[33mx[39;49;00m[33m'[39;49;00m, x, pos=[34m1[39;49;00m)
_data.param([33m'[39;49;00m[33my[39;49;00m[33m'[39;49;00m, y, pos=[34m2[39;49;00m)
_data.param([33m'[39;49;00m[33mz[39;49;00m[33m'[39;49;00m, z, pos=[34m3[39;49;00m, last=[34mTrue[39;49;00m)
[34mif[39;49;00m _data.test(_data.get([33m'[39;49;00m[33my[39;49;00m[33m'[39;49;00m, y) < _data.get([33m'[39;49;00m[33mz[39;49;00m[33m'[39;49;00m, z)):
[34mwith[39;49;00m _data:
[34mif[39;49;00m _data.test(_data.get([33m'[39;49;00m[33mx[39;49;00m[33m'[39;49;00m, x) < _data.get([33m'[39;49;00m[33my[39;49;00m[33m'[39;49;00m, y)):
[34mwith[39;49;00m _data:
[34mreturn[39;49;00m _data.set([33m'[39;49;00m[33m<middle() return value>[39;49;00m[33m'[39;49;00m, _data.get(
[33m'[39;49;00m[33my[39;49;00m[33m'[39;49;00m, y))
[34melse[39;49;00m:
[34mwith[39;49;00m _data:
[34mif[39;49;00m _data.test(_data.get([33m'[39;49;00m[33mx[39;49;00m[33m'[39;49;00m, x) < _data.get([33m'[39;49;00m[33mz[39;49;00m[33m'[39;49;00m, z)):
[34mwith[39;49;00m _data:
[34mreturn[39;49;00m _data.set([33m'[39;49;00m[33m<middle() return value>[39;49;00m[33m'[39;49;00m,
_data.get([33m'[39;49;00m[33my[39;49;00m[33m'[39;49;00m, y))
[34melse[39;49;00m:
[34mwith[39;49;00m _data:
[34mif[39;49;00m _data.test(_data.get([33m'[39;49;00m[33mx[39;49;00m[33m'[39;49;00m, x) > _data.get([33m'[39;49;00m[33my[39;49;00m[33m'[39;49;00m, y)):
[34mwith[39;49;00m _data:
[34mreturn[39;49;00m _data.set([33m'[39;49;00m[33m<middle() return value>[39;49;00m[33m'[39;49;00m, _data.get(
[33m'[39;49;00m[33my[39;49;00m[33m'[39;49;00m, y))
[34melse[39;49;00m:
[34mwith[39;49;00m _data:
[34mif[39;49;00m _data.test(_data.get([33m'[39;49;00m[33mx[39;49;00m[33m'[39;49;00m, x) > _data.get([33m'[39;49;00m[33mz[39;49;00m[33m'[39;49;00m, z)):
[34mwith[39;49;00m _data:
[34mreturn[39;49;00m _data.set([33m'[39;49;00m[33m<middle() return value>[39;49;00m[33m'[39;49;00m,
_data.get([33m'[39;49;00m[33mx[39;49;00m[33m'[39;49;00m, x))
[34mreturn[39;49;00m _data.set([33m'[39;49;00m[33m<middle() return value>[39;49;00m[33m'[39;49;00m, _data.get([33m'[39;49;00m[33mz[39;49;00m[33m'[39;49;00m, z))
###Markdown
By default, the `DataTracker` `param()` method simply calls `set()` to set variables.
###Code
class DataTracker(DataTracker):
def param(self, name: str, value: Any,
pos: Optional[int] = None, vararg: str = '', last: bool = False) -> Any:
"""
At the beginning of a function, track parameter `name` being set to `value`.
`pos` is the position of the argument (starting with 1).
`vararg` is "*" if `name` is a vararg parameter (as in *args),
and "**" is `name` is a kwargs parameter (as in *kwargs).
`last` is True if `name` is the last parameter.
"""
if self.log:
caller_func, lineno = self.caller_location()
info = ""
if pos is not None:
info += f" #{pos}"
print(f"{caller_func.__name__}:{lineno}: initializing {vararg}{name}{info}")
return self.set(name, value)
with DataTrackerTester(middle_tree, middle):
middle(2, 1, 3)
def args_test(x, *args, **kwargs): # type: ignore
print(x, *args, **kwargs)
args_tree = ast.parse(inspect.getsource(args_test))
TrackParamsTransformer().visit(args_tree)
dump_tree(args_tree)
with DataTrackerTester(args_tree, args_test):
args_test(1, 2, 3)
###Output
args_test:1: initializing x #1
args_test:1: setting x
args_test:1: initializing *args #2
args_test:1: setting args
args_test:1: initializing **kwargs #3
args_test:1: setting kwargs
1 2 3
###Markdown
End of Excursion What do we obtain after we have applied all these transformers on `middle()`? We see that the code now contains quite a load of instrumentation.
###Code
dump_tree(middle_tree)
###Output
[34mdef[39;49;00m [32mmiddle[39;49;00m(x, y, z):
_data.param([33m'[39;49;00m[33mx[39;49;00m[33m'[39;49;00m, x, pos=[34m1[39;49;00m)
_data.param([33m'[39;49;00m[33my[39;49;00m[33m'[39;49;00m, y, pos=[34m2[39;49;00m)
_data.param([33m'[39;49;00m[33mz[39;49;00m[33m'[39;49;00m, z, pos=[34m3[39;49;00m, last=[34mTrue[39;49;00m)
[34mif[39;49;00m _data.test(_data.get([33m'[39;49;00m[33my[39;49;00m[33m'[39;49;00m, y) < _data.get([33m'[39;49;00m[33mz[39;49;00m[33m'[39;49;00m, z)):
[34mwith[39;49;00m _data:
[34mif[39;49;00m _data.test(_data.get([33m'[39;49;00m[33mx[39;49;00m[33m'[39;49;00m, x) < _data.get([33m'[39;49;00m[33my[39;49;00m[33m'[39;49;00m, y)):
[34mwith[39;49;00m _data:
[34mreturn[39;49;00m _data.set([33m'[39;49;00m[33m<middle() return value>[39;49;00m[33m'[39;49;00m, _data.get(
[33m'[39;49;00m[33my[39;49;00m[33m'[39;49;00m, y))
[34melse[39;49;00m:
[34mwith[39;49;00m _data:
[34mif[39;49;00m _data.test(_data.get([33m'[39;49;00m[33mx[39;49;00m[33m'[39;49;00m, x) < _data.get([33m'[39;49;00m[33mz[39;49;00m[33m'[39;49;00m, z)):
[34mwith[39;49;00m _data:
[34mreturn[39;49;00m _data.set([33m'[39;49;00m[33m<middle() return value>[39;49;00m[33m'[39;49;00m,
_data.get([33m'[39;49;00m[33my[39;49;00m[33m'[39;49;00m, y))
[34melse[39;49;00m:
[34mwith[39;49;00m _data:
[34mif[39;49;00m _data.test(_data.get([33m'[39;49;00m[33mx[39;49;00m[33m'[39;49;00m, x) > _data.get([33m'[39;49;00m[33my[39;49;00m[33m'[39;49;00m, y)):
[34mwith[39;49;00m _data:
[34mreturn[39;49;00m _data.set([33m'[39;49;00m[33m<middle() return value>[39;49;00m[33m'[39;49;00m, _data.get(
[33m'[39;49;00m[33my[39;49;00m[33m'[39;49;00m, y))
[34melse[39;49;00m:
[34mwith[39;49;00m _data:
[34mif[39;49;00m _data.test(_data.get([33m'[39;49;00m[33mx[39;49;00m[33m'[39;49;00m, x) > _data.get([33m'[39;49;00m[33mz[39;49;00m[33m'[39;49;00m, z)):
[34mwith[39;49;00m _data:
[34mreturn[39;49;00m _data.set([33m'[39;49;00m[33m<middle() return value>[39;49;00m[33m'[39;49;00m,
_data.get([33m'[39;49;00m[33mx[39;49;00m[33m'[39;49;00m, x))
[34mreturn[39;49;00m _data.set([33m'[39;49;00m[33m<middle() return value>[39;49;00m[33m'[39;49;00m, _data.get([33m'[39;49;00m[33mz[39;49;00m[33m'[39;49;00m, z))
###Markdown
And when we execute this code, we see that we can track quite a number of events, while the code semantics stay unchanged.
###Code
with DataTrackerTester(middle_tree, middle):
m = middle(2, 1, 3)
m
###Output
middle:1: initializing x #1
middle:1: setting x
middle:1: initializing y #2
middle:1: setting y
middle:1: initializing z #3
middle:1: setting z
middle:2: getting y
middle:2: getting z
middle:2: testing condition
middle:3: entering block
middle:3: getting x
middle:3: getting y
middle:3: testing condition
middle:5: entering block
middle:5: getting x
middle:5: getting z
middle:5: testing condition
middle:6: entering block
middle:6: getting y
middle:6: setting <middle() return value>
middle:6: exiting block
middle:6: exiting block
middle:6: exiting block
|
tabular data/classification/Benchmarks/4. credit card/credit_1.ipynb | ###Markdown
Data Preparation
###Code
"Loading and preparing data"
datasets_path = os.path.join(os.path.dirname(os.path.dirname(os.getcwd())), 'Datasets\\')
url = datasets_path + 'data_credit_card.csv'
df = pd.read_csv(url)
df = df.drop(['Unnamed: 0'],axis =1)
df = df.rename(columns={'default payment next month': 'Credible'})
df.head()
a = df['PAY_2'].values
df['PAY_2'].unique()
steps = (pd.cut(a,11, retbins=True,include_lowest=True))[1][1:-1]
steps = np.unique(np.trunc(steps))
steps
" Handling data "
df['EDUCATION'] = df['EDUCATION'].replace({0:4, 5:4, 6:4})
df['MARRIAGE'] = df['MARRIAGE'].replace({0:3})
df['SEX'] = df['SEX'] - 1
df['EDUCATION'] = df['EDUCATION'] - 1
df['MARRIAGE'] = df['MARRIAGE'] - 1
" Decode Categorical Features "
sex_mapper = {0 : 'M', 1 : 'F'}
sex_mapper_inv = dict(map(reversed, sex_mapper.items()))
df['SEX'] = df['SEX'].replace(sex_mapper)
education_mapper = {0 : 'graduation_school', 1 : 'university', 2 : 'high_school', 3 : 'others'}
education_mapper_inv = dict(map(reversed, education_mapper.items()))
df['EDUCATION'] = df['EDUCATION'].replace(education_mapper)
marital_mapper = {0 : 'married' , 1 : 'single', 2 : 'others' }
marital_mapper_inv = dict(map(reversed, marital_mapper.items()))
df['MARRIAGE'] = df['MARRIAGE'].replace(marital_mapper)
df.head()
" display the features types "
df.dtypes
" Checking missing values "
df.replace('?', np.nan, inplace=True)
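# NOTE: `missing_values_table` is not defined in this notebook; it likely comes
# from the authors' helper code. A plausible stand-in (an assumption, not the
# original implementation) is sketched here:
def missing_values_table(df):
    # Per-column count and percentage of missing values
    missing = df.isnull().sum()
    percent = 100 * missing / len(df)
    table = pd.concat([missing, percent], axis=1)
    table.columns = ['Missing Values', '% of Total Values']
    return table[table['Missing Values'] > 0].sort_values('% of Total Values', ascending=False)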
missing_values_table(df)
" separate the data and the target "
data_df = df.drop(columns=['Credible'])
target_df = df['Credible']
" calculate the categorical features mask "
categorical_feature_mask = (data_df.dtypes == object)
categorical_feature_mask
categorical_cols_names = data_df.columns[categorical_feature_mask].tolist()
categorical_cols_names
numerical_cols_names = data_df.columns[~categorical_feature_mask].tolist()
numerical_cols_names
" if no values missed we execute this code : "
data_df = pd.concat([data_df[numerical_cols_names].astype(float), data_df[categorical_cols_names]],axis = 1)
data_df.head()
" Encoding categorical features"
data_df['SEX'] = data_df['SEX'].replace(sex_mapper_inv)
data_df['EDUCATION'] = data_df['EDUCATION'].replace(education_mapper_inv)
data_df['MARRIAGE'] = data_df['MARRIAGE'].replace(marital_mapper_inv)
data_df.head()
data_target_df = pd.concat([data_df, target_df], axis=1)
" generate the Test SET "
nb_test_instances = 1000
test_df = data_target_df.sample(n=nb_test_instances)
data_test_df = test_df.drop(columns=['Credible'])
target_test_df = test_df['Credible']
" generate the Training SET "
train_df = pd.concat([data_target_df,test_df]).drop_duplicates(keep=False)
data_train_df = train_df.drop(columns=['Credible'])
target_train_df = train_df['Credible']
" Extract values of the test set to generate the neighbors"
data_test = data_test_df.values
target_test = target_test_df.values
numerical_cols = np.arange(0,len(numerical_cols_names))
categorical_cols = np.arange(len(numerical_cols_names),data_df.shape[1])
###Output
_____no_output_____
###Markdown
Neighbors Generation
###Code
nb_neighbors = 50
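# NOTE: `generate_all_neighbors` is not defined in this notebook; it comes from
# the authors' helper code. A rough, hypothetical sketch of the idea (the real
# helper may differ): for each test instance, draw `nb` perturbed copies --
# Gaussian noise on numerical columns, resampled categories on categorical ones.
def generate_all_neighbors(data, numerical_cols, categorical_cols, nb):
    rng = np.random.default_rng(0)
    num_std = data[:, numerical_cols].astype(float).std(axis=0)
    neighbors = []
    for x in data:
        neigh = np.tile(x, (nb, 1)).astype(float)
        noise = rng.normal(0.0, 0.1, size=(nb, len(numerical_cols))) * num_std
        neigh[:, numerical_cols] += noise
        for c in categorical_cols:
            neigh[:, c] = rng.choice(data[:, c], size=nb)
        neighbors.append(neigh)
    return neighbors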
list_neigh = generate_all_neighbors(data_test,numerical_cols,categorical_cols,nb_neighbors)
" store all the neighbors together "
n = np.size(data_test,0)
all_neighbors = list_neigh[0]
for i in range(1,n) :
all_neighbors = np.concatenate((all_neighbors, list_neigh[i]), axis=0)
###Output
_____no_output_____
###Markdown
One hot encoding
###Code
df_neigh = pd.DataFrame(data = all_neighbors,columns= numerical_cols_names + categorical_cols_names)
df_neigh[categorical_cols_names] = df_neigh[categorical_cols_names].astype(int,errors='ignore')
" Decode all the data neighbors to perform one hot encoding "
df_neigh['SEX'] = df_neigh['SEX'].replace(sex_mapper)
df_neigh['EDUCATION'] = df_neigh['EDUCATION'].replace(education_mapper)
df_neigh['MARRIAGE'] = df_neigh['MARRIAGE'].replace(marital_mapper)
df_neigh.head()
" One hot encoding "
df_neigh = pd.get_dummies(df_neigh, prefix_sep='_', drop_first=True)
df_neigh
" Scale the neighbors data "
data_neigh = df_neigh.values
scaler_neigh = StandardScaler()
data_neigh_s = scaler_neigh.fit_transform(data_neigh)
" Store the neighbors in a list "
n = np.size(data_test,0)
list_neigh = []
j = 0
for i in range(0,n):
list_neigh.append(data_neigh_s[j:(j+nb_neighbors),:])
j += nb_neighbors
###Output
_____no_output_____
###Markdown
One hot encoding for the training and the test sets
###Code
data_train_df['SEX'] = data_train_df['SEX'].replace(sex_mapper)
data_train_df['EDUCATION'] = data_train_df['EDUCATION'].replace(education_mapper)
data_train_df['MARRIAGE'] = data_train_df['MARRIAGE'].replace(marital_mapper)
data_train_df = pd.get_dummies(data_train_df, prefix_sep='_', drop_first=True)
data_train_df.head()
data_train = data_train_df.values
target_train = target_train_df.values
data_test_df['SEX'] = data_test_df['SEX'].replace(sex_mapper)
data_test_df['EDUCATION'] = data_test_df['EDUCATION'].replace(education_mapper)
data_test_df['MARRIAGE'] = data_test_df['MARRIAGE'].replace(marital_mapper)
data_test_df = pd.get_dummies(data_test_df, prefix_sep='_', drop_first=True)
data_test_df.head()
data_test = data_test_df.values
target_test = target_test_df.values
" Scale the training and the test sets data"
scaler_train = StandardScaler()
data_train_s = scaler_train.fit_transform(data_train)
scaler_test = StandardScaler()
data_test_s = scaler_test.fit_transform(data_test)
" Define the functions to save and load data "
import pickle
def save_obj(obj, name):
with open(name + '.pkl', 'wb') as f:
pickle.dump(obj, f, pickle.HIGHEST_PROTOCOL)
def load_obj(name):
with open(name + '.pkl', 'rb') as f:
return pickle.load(f)
'SAVE THE DATA'
path = './saved_data/'
save_obj(data_train_s, path + 'data_train_s')
save_obj(target_train, path + 'target_train')
save_obj(data_test, path + 'data_test')
save_obj(data_test_s, path + 'data_test_s')
save_obj(target_test, path + 'target_test')
save_obj(list_neigh, path + 'list_neighbors')
###Output
_____no_output_____
###Markdown
Training the models
###Code
" Logistic Regression : "
lr = LogisticRegression(class_weight = "balanced",random_state=0,max_iter = 1000)
model_lr = lr.fit(data_train_s,target_train)
target_pred_lr = model_lr.predict(data_test_s)
" Random Forest : "
rdclassifier = RandomForestClassifier(class_weight = "balanced",n_estimators=100,max_depth=5, random_state=0)
model_rd = rdclassifier.fit(data_train_s,target_train)
target_pred_rd = model_rd.predict(data_test_s)
" SVM : "
clf = svm.SVC(class_weight = "balanced",probability=True)
model_svm = clf.fit(data_train_s, target_train)
target_pred_svm = model_svm.predict(data_test_s)
" Sklearn MLP Classifier : "
mlp = MLPClassifier(hidden_layer_sizes=(50,30), max_iter=1000,
solver='adam', random_state=1,
learning_rate_init=.1)
model_nt = mlp.fit(data_train_s, target_train)
target_pred_mlp = model_nt.predict(data_test_s)
###Output
_____no_output_____
###Markdown
Scores of the black box models
###Code
print(f"{'The score of the logistic regression model is ' :<50}{': {}'.format(round(model_lr.score(data_test_s,target_test),4))}")
print(f"{'The score of the Random Forest model is ' :<50}{': {}'.format(round(model_rd.score(data_test_s,target_test),4))}")
print(f"{'The score of the SVM model is ' :<50}{': {}'.format(round(model_svm.score(data_test_s,target_test),4))}")
print(f"{'The score of the Multi-Layer-Perceptron model is ' :<50}{': {}'.format(round(model_nt.score(data_test_s,target_test),4))}")
###Output
The score of the logistic regression model is : 0.66
The score of the Random Forest model is : 0.809
The score of the SVM model is : 0.794
The score of the Multi-Layer-Perceptron model is : 0.83
###Markdown
Execution of Split Based Selection Form Algorithm :
###Code
split_point = len(numerical_cols)
nb_models = 100
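# NOTE: `SplitBasedSelectionForm` is not defined in this notebook; it comes
# from the authors' accompanying code. Judging by its arguments, it appears to
# partition the test instances (splitting numerical from categorical columns
# at `split_point`) and select at most `nb_models` subgroup models that
# approximate the black-box `model_nt` on the generated neighbors.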
(L_Subgroups, P) = SplitBasedSelectionForm(data_test_s, target_test, nb_models, model_nt, list_neigh, split_point, 2)
'SAVE THE LIST OF THE SUBGROUPS'
save_obj(L_Subgroups, path + 'list_subgroups')
###Output
_____no_output_____
###Markdown
Subgroups Descriptions
###Code
att_names = data_test_df.columns
data_test_means = scaler_test.mean_
data_test_stds = np.sqrt(scaler_test.var_)
patt_descriptions = patterns_sc(P,split_point,data_test_s,att_names,data_test_means,data_test_stds)
'SAVE THE SUBGROUPS PATTERNS'
save_obj(patt_descriptions, path + 'patterns')
save_obj(att_names, path + 'att_names')
###Output
_____no_output_____ |
code/intro_to_neural_networks.ipynb | ###Markdown
Intro to Neural NetworksLearning objectives:- Define a neural network and its hidden layers using TensorFlow's DNNRegressor- Train a neural network to learn nonlinear patterns in a data set and achieve better performance than a linear regression model. We will predict median_house_value directly. Setup: load the California housing data set.
###Code
from __future__ import print_function
import math
from IPython import display
from matplotlib import cm
from matplotlib import gridspec
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
from sklearn import metrics
import tensorflow as tf
from tensorflow.python.data import Dataset
tf.logging.set_verbosity(tf.logging.ERROR)
pd.options.display.max_rows = 10
pd.options.display.float_format = '{:.1f}'.format
california_housing_df = pd.read_csv("https://download.mlcc.google.cn/mledu-datasets/california_housing_train.csv",
sep=',')
california_housing_df = california_housing_df.reindex(np.random.permutation(california_housing_df.index))
# Preprocess features
def preprocess_features(california_housing_df):
    """Prepares input features from the California housing data and adds engineered features.
    Args:
        california_housing_df: DataFrame with the California housing data
    Returns:
        DataFrame containing the processed features
    """
selected_features = california_housing_df[["latitude",
"longitude",
"housing_median_age",
"total_rooms",
"total_bedrooms",
"population",
"households",
"median_income"]]
processed_features = selected_features.copy()
    # Create an additional (engineered) feature
processed_features["rooms_per_person"] = (california_housing_df["total_rooms"] / california_housing_df["population"])
return processed_features
# Preprocess targets
def preprocess_targets(california_housing_df):
    """Prepares target features (i.e., labels) from the California housing DataFrame.
    Args:
        california_housing_df: DataFrame with the California housing data
    Returns:
        DataFrame containing the target labels
    """
    output_targets = pd.DataFrame()
    # Scale the target values (to units of thousands of dollars)
output_targets["median_house_value"] = (california_housing_df["median_house_value"] / 1000.0)
return output_targets
# Use the first 12000 of the 17000 examples for training
training_examples = preprocess_features(california_housing_df.head(12000))
training_targets = preprocess_targets(california_housing_df.head(12000))
# Use the last 5000 examples for validation
validation_examples = preprocess_features(california_housing_df.tail(5000))
validation_targets = preprocess_targets(california_housing_df.tail(5000))
print("Training examples summary:")
display.display(training_examples.describe())
print("Validation examples summary:")
display.display(validation_examples.describe())
print("Training targets summary:")
display.display(training_targets.describe())
print("Validation targets summary:")
display.display(validation_targets.describe())
###Output
Training examples summary:
###Markdown
Building the Neural NetworkWe define the network using the DNNRegressor class. Its structure is specified via hidden_units, a list of integers: each integer corresponds to one hidden layer and gives its number of nodes (for example, hidden_units=[10, 10] defines two hidden layers with 10 nodes each). By default, all hidden layers use the ReLU activation function and are fully connected.
###Code
def my_input_fn(features, targets, batch_size=1,shuffle=True, num_epochs=None):
"""使用多个特征训练一个线性回归器
Args:
features: 特征的DataFrame
targets: 目标的DataFrame
batch_size: 传递给模型的批大小
shuffle: 是否打乱数据
num_epochs: 数据重复的epochs数
Returns:
下一批数据元组(features, labels)
"""
# 转换DataFrame到numpy数组
features = {key:np.array(value) for key,value in dict(features).items()}
# 构建数据集
ds = Dataset.from_tensor_slices((features, targets))
ds = ds.batch(batch_size).repeat(num_epochs)
# 打乱数据
if shuffle:
ds = ds.shuffle(10000)
# 返回下一批数据
features, labels = ds.make_one_shot_iterator().get_next()
return features, labels
def construct_feature_columns(input_features):
"""构建特征列
Args:
input_features: 数值特征的名字
Returns:
特征列集
"""
return set([tf.feature_column.numeric_column(my_feature) for my_feature in input_features])
def train_nn_regression_model(learning_rate,
steps,
batch_size,
hidden_units,
feature_columns,
training_examples,
training_targets,
validation_examples,
validation_targets):
"""使用多个特征训练一个线性回归模型
"""
periods = 10
steps_per_period = steps / periods
    # Define the optimizer
my_optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)
    # Create a DNN regressor (the variable name is kept from the earlier linear-model exercises)
linear_regressor = tf.estimator.DNNRegressor(feature_columns=feature_columns,
hidden_units=hidden_units,
optimizer=my_optimizer)
    # Create the input functions
training_input_fn = lambda: my_input_fn(training_examples,training_targets["median_house_value"], batch_size=batch_size)
predict_training_input_fn = lambda: my_input_fn(training_examples, training_targets["median_house_value"], num_epochs=1, shuffle=False)
predict_validation_input_fn = lambda: my_input_fn(validation_examples, validation_targets["median_house_value"], num_epochs=1, shuffle=False)
    # Train the model, printing the loss after each period
print("Start training...")
print("RMSE (on training data): ")
training_rmse = []
validation_rmse = []
for period in range(0, periods):
linear_regressor.train(input_fn=training_input_fn, steps=steps_per_period)
        # Compute predictions
training_predictions = linear_regressor.predict(input_fn=predict_training_input_fn)
training_predictions = np.array([item["predictions"][0] for item in training_predictions])
validation_predictions = linear_regressor.predict(input_fn=predict_validation_input_fn)
validation_predictions = np.array([item["predictions"][0] for item in validation_predictions])
        # Compute training and validation loss
training_root_mean_squared_error = math.sqrt(metrics.mean_squared_error(training_predictions, training_targets))
validation_root_mean_squared_error = math.sqrt(metrics.mean_squared_error(validation_predictions, validation_targets))
        # Print the results for this period
print("period %02d : %.2f" % (period, training_root_mean_squared_error))
training_rmse.append(training_root_mean_squared_error)
validation_rmse.append(validation_root_mean_squared_error)
print("Model training finished!")
    # Plot loss over training periods
plt.ylabel("RMSE")
plt.xlabel("Periods")
plt.title("Root Mean Squared Error via Periods")
plt.tight_layout()
plt.plot(training_rmse, label="training")
    plt.plot(validation_rmse, label="validation")
plt.legend()
print("Final RMSE (on training data): %0.2f" % training_root_mean_squared_error)
print("Final RMSE (on validation data): %0.2f" % validation_root_mean_squared_error)
return linear_regressor
dnn_regressor = train_nn_regression_model(
learning_rate=0.001,
steps=2000,
batch_size=100,
hidden_units=[10, 10],
feature_columns=construct_feature_columns(training_examples),
training_examples=training_examples,
training_targets=training_targets,
validation_examples=validation_examples,
validation_targets=validation_targets)
###Output
Start training...
RMSE (on training data):
period 00 : 169.19
period 01 : 166.39
period 02 : 154.48
period 03 : 148.75
period 04 : 138.69
period 05 : 134.36
period 06 : 121.22
period 07 : 115.44
period 08 : 111.80
period 09 : 109.40
Model training finished!
Final RMSE (on training data): 109.40
Final RMSE (on validation data): 104.92
###Markdown
> **Note**: In this exercise, the choice of parameters is somewhat arbitrary. We tried increasingly complex combinations, together with longer training, until the error fell below our target. This is by no means the best combination; other combinations may well achieve a lower RMSE. If the goal is to find the model that yields the smallest error, a more rigorous process, such as a **parameter search**, is needed.
###Code
# Evaluate the neural network model on the test data
california_housing_test_data = pd.read_csv("https://download.mlcc.google.cn/mledu-datasets/california_housing_test.csv", sep=",")
test_examples = preprocess_features(california_housing_test_data)
test_targets = preprocess_targets(california_housing_test_data)
predict_testing_input_fn = lambda: my_input_fn(test_examples,
test_targets["median_house_value"],
num_epochs=1,
shuffle=False)
test_predictions = dnn_regressor.predict(input_fn=predict_testing_input_fn)
test_predictions = np.array([item['predictions'][0] for item in test_predictions])
root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(test_predictions, test_targets))
print("Final RMSE (on test data): %0.2f" % root_mean_squared_error)
###Output
Final RMSE (on test data): 107.72
|
homework/Homework01.ipynb | ###Markdown
**1**. (25 points)The following iterative sequence is defined for the set of positive integers:- n → n/2 (n is even)- n → 3n + 1 (n is odd)Using the rule above and starting with 13, we generate the following sequence:13 → 40 → 20 → 10 → 5 → 16 → 8 → 4 → 2 → 1It can be seen that this sequence (starting at 13 and finishing at 1) contains 10 terms. Although it has not been proved yet (Collatz Problem), it is thought that all starting numbers finish at 1.Which starting number, under one million, produces the longest chain?NOTE: Once the chain starts the terms are allowed to go above one million.
###Code
def chain_len(i):
l = 1
    while i != 1:
        if i % 2 == 0:
            i, l = i // 2, l + 1
        else:
            i, l = (3 * i + 1) // 2, l + 2  # odd case: 3n+1 is even, so combine two steps
return l
def max_len(b):
l, max_i, max_l= 0, 0, 0
for i in range(1, int(b)):
if (l := chain_len(i)) > max_l:
max_i, max_l = i, l
return max_i
print("Number that produces longest chain:", max_len(1e6))
###Output
Number that produces longest chain: 837799
###Markdown
**2** (25 points)- Perform the median polish to calculate just the *residuals* for this [example](https://mgimond.github.io/ES218/Week11a.html) in Python. - Use the matrix `xs` provided- Display the final result after 3 iterations to 1 decimal place and check if it agrees with the residuals table shown in the linked example.
###Code
import numpy as np

xs = np.array([
(25.3,32.1,38.8,25.4),
(25.3,29,31,21.1),
(18.2,18.8,19.3,20.3),
(18.3,24.3,15.7,24),
(16.3,19,16.8,17.5)
]).T
res = xs - np.median(xs)
def effects(a, axis):
med = np.median(a, axis=axis)
a -= np.expand_dims(med, axis=axis)
return a
for _ in range(3):
res = effects(res, 1)
res = effects(res, 0)
print("Calculated residuals:")
np.round(res, 1)
###Output
Calculated residuals:
###Markdown
**3**. (50 points)A Caesar cipher is a very simple method of encoding and decoding data. The cipher simply replaces characters with the character offset by $k$ places. For example, if the offset is 3, we replace `a` with `d`, `b` with `e` etc. The cipher wraps around so we replace `y` with `b`, `z` with `c` and so on. Punctuation, spaces and numbers are left unchanged.- Write a function `encode` that takes as arguments a string and an integer offset and returns the encoded cipher.- Write a function `decode` that takes as arguments a cipher and an integer offset and returns the decoded string. - Write a function `auto_decode` that takes as argument a cipher and uses a statistical method to guess the optimal offset to decode the cipher, assuming the original string is in English which has the following letter frequency:```pythonfreq = { 'a': 0.08167, 'b': 0.01492, 'c': 0.02782, 'd': 0.04253, 'e': 0.12702, 'f': 0.02228, 'g': 0.02015, 'h': 0.06094, 'i': 0.06966, 'j': 0.00153, 'k': 0.00772, 'l': 0.04025, 'm': 0.02406, 'n': 0.06749, 'o': 0.07507, 'p': 0.01929, 'q': 0.00095, 'r': 0.05987, 's': 0.06327, 't': 0.09056, 'u': 0.02758, 'v': 0.00978, 'w': 0.0236, 'x': 0.0015, 'y': 0.01974, 'z': 0.00074}```- Encode the following nursery rhyme using a random offset from 10 to 20, then recover the original using `auto_decode`:```textBaa, baa, black sheep,Have you any wool?Yes, sir, yes, sir,Three bags full;One for the master,And one for the dame,And one for the little boyWho lives down the lane.```
###Code
import random
import string
from collections import Counter

def encode(txt, off):
# shift lowercase by offset
lowercase = string.ascii_lowercase
lwr_shift = lowercase[off:] + lowercase[:off]
# shift uppercase by offset
uppercase = string.ascii_uppercase
upr_shift = uppercase[off:] + uppercase[:off]
# generate translation table
translate = txt.maketrans(
lowercase + uppercase, lwr_shift + upr_shift
)
return txt.translate(translate)
def decode(txt, off):
return encode(txt, -off)
def auto_decode(txt):
freq = {
'a': 0.08167,
'b': 0.01492,
'c': 0.02782,
'd': 0.04253,
'e': 0.12702,
'f': 0.02228,
'g': 0.02015,
'h': 0.06094,
'i': 0.06966,
'j': 0.00153,
'k': 0.00772,
'l': 0.04025,
'm': 0.02406,
'n': 0.06749,
'o': 0.07507,
'p': 0.01929,
'q': 0.00095,
'r': 0.05987,
's': 0.06327,
't': 0.09056,
'u': 0.02758,
'v': 0.00978,
'w': 0.0236,
'x': 0.0015,
'y': 0.01974,
'z': 0.00074
}
min_diff = 1e6 # minimum diff between measured and actual freq
best_off = 0 # offset with lowest diff
# for each offset
for off in range(26):
# count letters in encoded text
dec = decode(txt, off)
count = Counter(c for c in dec.lower() if c.isalpha()) # count letters only
num_char = sum(count.values()) # number of letters
# sum absolute error
diff = 0.
for key, value in count.items():
diff += abs(freq[key] - value / num_char) # compare relative freq
# lowest error => new best offset
if diff < min_diff:
min_diff, best_off = diff, off
return decode(txt, best_off)
test_txt = """Baa, baa, black sheep,
Have you any wool?
Yes, sir, yes, sir,
Three bags full;
One for the master,
And one for the dame,
And one for the little boy
Who lives down the lane."""
rand_off = random.randint(10, 20)
test_enc = encode(test_txt, rand_off)
print(test_enc, '\n')
test_dec = auto_decode(test_enc)
print(test_dec)
###Output
Utt, utt, uetvd laxxi,
Atox rhn tgr phhe?
Rxl, lbk, rxl, lbk,
Makxx utzl ynee;
Hgx yhk max ftlmxk,
Tgw hgx yhk max wtfx,
Tgw hgx yhk max ebmmex uhr
Pah eboxl whpg max etgx.
Baa, baa, black sheep,
Have you any wool?
Yes, sir, yes, sir,
Three bags full;
One for the master,
And one for the dame,
And one for the little boy
Who lives down the lane.
###Markdown
Homework 01: Python PracticeThis is meant to get you up to speed with the level of skill in Python that we expect for BIOS 823. You can use online resources, but avoid copy and paste as you will not learn that way. Instead, try to understand the reference/tutorial/example found, then close the browser and try to re-code it yourself. **1**. (25 points)In this exercise, we will practice using Pandas dataframes to explore and summarize a data set `heart`.This data contains the survival time after receiving a heart transplant, the age of the patient and whether or not the survival time was censored- Number of Observations - 69- Number of Variables - 3Variable name definitions::- survival - Days after surgery until death- censors - indicates if an observation is censored. 1 is uncensored- age - age at the time of surgeryAnswer the following questions (5 points each) with respect to the `heart` data set:- How many patients were censored?- What is the correlation coefficient between age and survival for uncensored patients? - What is the average age for censored and uncensored patients?- What is the average survival time for censored and uncensored patients under the age of 45?- What is the survival time of the youngest and oldest uncensored patient?
###Code
import statsmodels.api as sm
heart = sm.datasets.heart.load_pandas().data
heart.head(n=6)
###Output
_____no_output_____
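###Markdown
A minimal sketch of one way to answer these questions with pandas, using the `heart` frame loaded above (columns `survival`, `censors`, `age`; per the description, `censors == 1` marks uncensored patients):
###Code
censored = heart[heart.censors == 0]
uncensored = heart[heart.censors == 1]
print(len(censored))                                    # number of censored patients
print(uncensored.age.corr(uncensored.survival))         # age/survival correlation, uncensored only
print(heart.groupby('censors').age.mean())              # average age by censoring status
print(heart[heart.age < 45].groupby('censors').survival.mean())  # average survival under age 45
print(uncensored.loc[[uncensored.age.idxmin(), uncensored.age.idxmax()], 'survival'])  # youngest and oldest uncensored
###Output
_____no_output_____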
###Markdown
**2**. (25 points)Build a predictive model to guess the species of an iris flower by its measurements. Split the data set provided into 2/3 training and 1/3 test examples using a random splitting strategy. Fit a `sklearn.neighbors.KNeighborsClassifier` to the training data (you can use the default parameters). Generate the $3 \times 3$ confusion matrix for the model evaluated on the test data.
###Code
from sklearn import datasets
iris = datasets.load_iris()
iris.keys()
###Output
_____no_output_____
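###Markdown
A sketch of one possible completion (random 2/3-1/3 split via `train_test_split` with an arbitrary `random_state`, default `KNeighborsClassifier`):
###Code
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=1/3, random_state=0)
knn = KNeighborsClassifier().fit(X_train, y_train)
confusion_matrix(y_test, knn.predict(X_test))  # 3x3 confusion matrix on the test data
###Output
_____no_output_____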
###Markdown
**3**. (50 points)Write code to generate a plot similar to those shown below using the explanation for generation of 1D Cellular Automata found [here](http://mathworld.wolfram.com/ElementaryCellularAutomaton.html). You should only need to use standard Python, `numpy` and `matplotlib`.The input to the function making the plots should be a simple list of rules```pythonrules = [30, 54, 60, 62, 90, 94, 102, 110, 122, 126, 150, 158, 182, 188, 190, 220, 222, 250]make_plots(rules, niter, ncols)```You may, of course, write other helper functions to keep your code modular.A plotting function is provided so you only need to code the two functions above.
###Code
%matplotlib inline
from matplotlib.ticker import NullFormatter, IndexLocator
import matplotlib.pyplot as plt
def plot_grid(rule, grid, ax=None):
"""Plot a single grid."""
if ax is None:
ax = plt.subplot(111)
with plt.style.context('seaborn-white'):
ax.grid(True, which='major', color='grey', linewidth=0.5)
ax.imshow(grid, interpolation='none', cmap='Greys', aspect=1, alpha=0.8)
ax.xaxis.set_major_locator(IndexLocator(1, 0))
ax.yaxis.set_major_locator(IndexLocator(1, 0))
ax.xaxis.set_major_formatter( NullFormatter() )
ax.yaxis.set_major_formatter( NullFormatter() )
ax.set_title('Rule %d' % rule)
###Output
_____no_output_____
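###Markdown
One possible sketch of the two required functions, assuming the usual single live cell in the middle of the first row: `step` expands the rule number into its 8-bit lookup table and advances one generation, and `make_plots` evolves each rule for `niter` steps and draws the grids with `plot_grid` above.
###Code
import numpy as np
def step(row, rule):
    """Advance an elementary CA by one generation (edges padded with zeros)."""
    bits = [(rule >> k) & 1 for k in range(8)]             # rule number -> lookup table
    padded = np.pad(row, 1)
    idx = 4 * padded[:-2] + 2 * padded[1:-1] + padded[2:]  # (left, center, right) -> 0..7
    return np.array([bits[i] for i in idx], dtype=int)
def make_plots(rules, niter, ncols):
    nrows = int(np.ceil(len(rules) / ncols))
    fig, axes = plt.subplots(nrows, ncols, figsize=(3 * ncols, 2 * nrows))
    for rule, ax in zip(rules, np.atleast_1d(axes).ravel()):
        grid = np.zeros((niter, 2 * niter + 1), dtype=int)
        grid[0, niter] = 1                                 # single live cell in the middle
        for i in range(1, niter):
            grid[i] = step(grid[i - 1], rule)
        plot_grid(rule, grid, ax)
    plt.tight_layout()
###Output
_____no_output_____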
###Markdown
Homework 01 **1**. (25 points)The code below gives five "documents" with titles in `titles` and text in `contents`. - Convert each text into "words" by converting to lower case, removing punctuation and splitting on whitespace- Make a list of all unique "words" in any of the texts- Create a pandas DataFrame whose rows are words, columns are titles, and values are counts of the word in the document- Add a column `total` that counts the total number of occurrences for each word across all documents- Show the rows for the 5 most commonly used words
###Code
import sklearn
from sklearn.datasets import fetch_20newsgroups
twenty = fetch_20newsgroups(subset='train')
target_names = twenty['target_names']
titles = [target_names[i] for i in twenty['target'][2:7]]
contents = twenty['data'][2:7]
###Output
_____no_output_____
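###Markdown
A compact sketch of the requested word-count table (it assumes the five titles are distinct, since they are used as dictionary keys here):
###Code
import string
from collections import Counter
import pandas as pd
counts = {
    title: Counter(text.lower().translate(str.maketrans('', '', string.punctuation)).split())
    for title, text in zip(titles, contents)
}
df = pd.DataFrame(counts).fillna(0).astype(int)  # rows are words, columns are titles
df['total'] = df.sum(axis=1)                     # occurrences across all documents
df.nlargest(5, 'total')                          # five most commonly used words
###Output
_____no_output_____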
###Markdown
**2**. (75 points)A Caesar cipher is a very simple method of encoding and decoding data. The cipher simply replaces characters with the character offset by $k$ places. For example, if the offset is 3, we replace `a` with `d`, `b` with `e` etc. The cipher wraps around so we replace `y` with `b`, `z` with `c` and so on. Punctuation, spaces and numbers are left unchanged.- Write a function `encode` that takes as arguments a string and an integer offset and returns the encoded cipher.- Write a function `decode` that takes as arguments a cipher and an integer offset and returns the decoded string. - Write a function `auto_decode` that takes as argument a cipher and uses a statistical method to guess the optimal offset to decode the cipher, assuming the original string is in English which has the following letter frequency:```pythonfreq = { 'a': 0.08167, 'b': 0.01492, 'c': 0.02782, 'd': 0.04253, 'e': 0.12702, 'f': 0.02228, 'g': 0.02015, 'h': 0.06094, 'i': 0.06966, 'j': 0.00153, 'k': 0.00772, 'l': 0.04025, 'm': 0.02406, 'n': 0.06749, 'o': 0.07507, 'p': 0.01929, 'q': 0.00095, 'r': 0.05987, 's': 0.06327, 't': 0.09056, 'u': 0.02758, 'v': 0.00978, 'w': 0.0236, 'x': 0.0015, 'y': 0.01974, 'z': 0.00074}```- Encode the following nursery rhyme using a random offset from 10 to 20, then recover the original using `auto_decode`:```textBaa, baa, black sheep,Have you any wool?Yes, sir, yes, sir,Three bags full;One for the master,And one for the dame,And one for the little boyWho lives down the lane.```
###Code
###Output
_____no_output_____
###Markdown
**1**. (25 points)The following iterative sequence is defined for the set of positive integers:- n → n/2 (n is even)- n → 3n + 1 (n is odd)Using the rule above and starting with 13, we generate the following sequence:13 → 40 → 20 → 10 → 5 → 16 → 8 → 4 → 2 → 1It can be seen that this sequence (starting at 13 and finishing at 1) contains 10 terms. Although it has not been proved yet (Collatz Problem), it is thought that all starting numbers finish at 1.Which starting number, under one million, produces the longest chain?NOTE: Once the chain starts the terms are allowed to go above one million.
###Code
###Output
_____no_output_____
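###Markdown
One straightforward sketch: memoize the chain length and scan every starting value (the longest chain under one million has only a few hundred terms, so the recursion stays well below Python's default limit):
###Code
from functools import lru_cache
@lru_cache(maxsize=None)
def chain_length(n):
    if n == 1:
        return 1
    return 1 + chain_length(n // 2 if n % 2 == 0 else 3 * n + 1)
max(range(1, 1000000), key=chain_length)  # starting number with the longest chain
###Output
_____no_output_____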
###Markdown
**2** (25 points)- Perform the median polish to calculate just the *residuals* for this [example](https://mgimond.github.io/ES218/Week11a.html) in Python. - Use the matrix `xs` provided- Display the final result after 3 iterations to 1 decimal place and check if it agrees with 
###Code
import numpy as np

xs = np.array([
(25.3,32.1,38.8,25.4),
(25.3,29,31,21.1),
(18.2,18.8,19.3,20.3),
(18.3,24.3,15.7,24),
(16.3,19,16.8,17.5)
]).T
###Output
_____no_output_____
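###Markdown
A sketch that computes just the residuals: each iteration sweeps the row medians and then the column medians out of the table (row and column effects are not accumulated, since only the residuals are asked for):
###Code
import numpy as np
def median_polish_residuals(x, niter=3):
    r = x.astype(float).copy()
    for _ in range(niter):
        r -= np.median(r, axis=1, keepdims=True)  # sweep out row medians
        r -= np.median(r, axis=0, keepdims=True)  # sweep out column medians
    return r
np.round(median_polish_residuals(xs), 1)
###Output
_____no_output_____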
###Markdown
**3**. (50 points)A Caesar cipher is a very simple method of encoding and decoding data. The cipher simply replaces characters with the character offset by $k$ places. For example, if the offset is 3, we replace `a` with `d`, `b` with `e` etc. The cipher wraps around so we replace `y` with `b`, `z` with `c` and so on. Punctuation, spaces and numbers are left unchanged.- Write a function `encode` that takes as arguments a string and an integer offset and returns the encoded cipher.- Write a function `decode` that takes as arguments a cipher and an integer offset and returns the decoded string. - Write a function `auto_decode` that takes as argument a cipher and uses a statistical method to guess the optimal offset to decode the cipher, assuming the original string is in English which has the following letter frequency:```pythonfreq = { 'a': 0.08167, 'b': 0.01492, 'c': 0.02782, 'd': 0.04253, 'e': 0.12702, 'f': 0.02228, 'g': 0.02015, 'h': 0.06094, 'i': 0.06966, 'j': 0.00153, 'k': 0.00772, 'l': 0.04025, 'm': 0.02406, 'n': 0.06749, 'o': 0.07507, 'p': 0.01929, 'q': 0.00095, 'r': 0.05987, 's': 0.06327, 't': 0.09056, 'u': 0.02758, 'v': 0.00978, 'w': 0.0236, 'x': 0.0015, 'y': 0.01974, 'z': 0.00074}```- Encode the following nursery rhyme using a random offset from 10 to 20, then recover the original using `auto_decode`:```textBaa, baa, black sheep,Have you any wool?Yes, sir, yes, sir,Three bags full;One for the master,And one for the dame,And one for the little boyWho lives down the lane.```
###Code
###Output
_____no_output_____
###Markdown
Homework 01 **1**. (25 points)The code below gives five "documents" with titles in `titles` and text in `contents`. - Convert each text into "words" by converting to lower case, removing punctuation and splitting on whitespace- Make a list of all unique "words" in any of the texts- Create a pandas DataFrame whose rows are words, columns are titles, and values are counts of the word in the document- Add a column `total` that counts the total number of occurrences for each word across all documents- Show the rows for the 5 most commonly used words
###Code
import sklearn
from sklearn.datasets import fetch_20newsgroups
twenty = fetch_20newsgroups(subset='train')
target_names = twenty['target_names']
titles = [target_names[i] for i in twenty['target'][2:7]]
contents = twenty['data'][2:7]
import string
texts = []
for i in range(len(contents)):
c = contents[i].lower()
texts.append(c.translate(str.maketrans('', '', string.punctuation))) # remove punctuation
# texts.append(c.translate(str.maketrans(string.punctuation, ' ' * len(string.punctuation)))) # substitute each punctuation with white space
# print(texts[1]) # 'texts' stores texts in lower case removing punctuation
wordsList = [text.split() for text in texts] # list of list of words in each document
vocab = list(set([word for words in wordsList for word in words])) # list of all unique words in any of the texts
# vocab
import numpy as np
import pandas as pd
df = pd.DataFrame(np.zeros((len(vocab), len(titles)), dtype = int), index = vocab, columns = titles)
for doc_idx in range(len(wordsList)):
for word in wordsList[doc_idx]:
df[titles[doc_idx]][word] += 1
total = df.sum(axis=1)
df['total'] = total
# print(df)
total_np = np.array(total)
total_sorted_idx = list(np.argsort(total_np)[::-1][:5])
df.loc[[vocab[idx] for idx in total_sorted_idx], :]
###Output
_____no_output_____
###Markdown
**2**. (75 points)A Caesar cipher is a very simple method of encoding and decoding data. The cipher simply replaces characters with the character offset by $k$ places. For example, if the offset is 3, we replace `a` with `d`, `b` with `e` etc. The cipher wraps around so we replace `y` with `b`, `z` with `c` and so on. Punctuation, spaces and numbers are left unchanged.- Write a function `encode` that takes as arguments a string and an integer offset and returns the encoded cipher.- Write a function `decode` that takes as arguments a cipher and an integer offset and returns the decoded string. - Write a function `auto_decode` that takes as argument a cipher and uses a statistical method to guess the optimal offset to decode the cipher, assuming the original string is in English which has the following letter frequency:```pythonfreq = { 'a': 0.08167, 'b': 0.01492, 'c': 0.02782, 'd': 0.04253, 'e': 0.12702, 'f': 0.02228, 'g': 0.02015, 'h': 0.06094, 'i': 0.06966, 'j': 0.00153, 'k': 0.00772, 'l': 0.04025, 'm': 0.02406, 'n': 0.06749, 'o': 0.07507, 'p': 0.01929, 'q': 0.00095, 'r': 0.05987, 's': 0.06327, 't': 0.09056, 'u': 0.02758, 'v': 0.00978, 'w': 0.0236, 'x': 0.0015, 'y': 0.01974, 'z': 0.00074}```- Encode the following nursery rhyme using a random offset from 10 to 20, then recover the original using `auto_decode`:```textBaa, baa, black sheep,Have you any wool?Yes, sir, yes, sir,Three bags full;One for the master,And one for the dame,And one for the little boyWho lives down the lane.```
###Code
def encode(input_str, offset):
input_str = input_str.lower()
output_cipher = ''
lower_dict = dict(zip(string.ascii_lowercase, range(26))) # alphabet as key, index as value
for i in range(len(input_str)):
if input_str[i] in string.ascii_lowercase:
offset_idx = lower_dict[input_str[i]] + offset
offset_idx -= int(offset_idx/26) * 26 # maintain the index in range [-25:25] to ensure valid
output_cipher += string.ascii_lowercase[offset_idx]
else:
output_cipher += input_str[i]
return output_cipher
def decode(input_cipher, offset):
output_str = ''
lower_dict = dict(zip(string.ascii_lowercase, range(26))) # alphabet as key, index as value
for i in range(len(input_cipher)):
if input_cipher[i] in string.ascii_lowercase:
offset_idx = lower_dict[input_cipher[i]] - offset
offset_idx -= int(offset_idx/26) * 26 # maintain the index in range [-25:25] to ensure valid
output_str += string.ascii_lowercase[offset_idx]
else:
output_str += input_cipher[i]
return output_str
freq = {
'a': 0.08167,
'b': 0.01492,
'c': 0.02782,
'd': 0.04253,
'e': 0.12702,
'f': 0.02228,
'g': 0.02015,
'h': 0.06094,
'i': 0.06966,
'j': 0.00153,
'k': 0.00772,
'l': 0.04025,
'm': 0.02406,
'n': 0.06749,
'o': 0.07507,
'p': 0.01929,
'q': 0.00095,
'r': 0.05987,
's': 0.06327,
't': 0.09056,
'u': 0.02758,
'v': 0.00978,
'w': 0.0236,
'x': 0.0015,
'y': 0.01974,
'z': 0.00074
}
def auto_decode(input_cipher):
optimal_offset = 0
optimal_str = ''
count_list = [0] * 26
lower_dict = dict(zip(string.ascii_lowercase, range(26)))
for char in input_cipher:
if char in string.ascii_lowercase:
count_list[lower_dict[char]] += 1
sum_count = sum(count_list)
count_list += count_list # repeat the list to allow access
se_list = [0] * 26
    for i in range(1, 26): # iterate all non-zero offsets
        shifted_count_list = [count_list[k] for k in range(i, 26 + i)] # count_list decoded with current offset
        freq_list = [shifted_count_list[m]/sum_count for m in range(26)] # frequency of each letter in current string
        se_list[i] = sum([abs(freq_list[n]**2 - freq[list(freq.keys())[n]]**2) for n in range(26)]) # error vs English frequencies
    optimal_offset = np.argsort(np.array(se_list))[1] # index 0 was never scored and stays 0, so take the second-smallest entry
optimal_str = decode(input_cipher, optimal_offset)
return optimal_str, optimal_offset
import random as rand
rhyme = """Baa, baa, black sheep,
Have you any wool?
Yes, sir, yes, sir,
Three bags full;
One for the master,
And one for the dame,
And one for the little boy
Who lives down the lane."""
encoded = encode(rhyme, rand.randint(10,20))
origin_str, opt_offset = auto_decode(encoded)
print(origin_str)
print(opt_offset)
###Output
baa, baa, black sheep,
have you any wool?
yes, sir, yes, sir,
three bags full;
one for the master,
and one for the dame,
and one for the little boy
who lives down the lane.
19
|
lab6.ipynb | ###Markdown
Extract Job Posts from Indeed Before extracting job posts from [Indeed](https://www.indeed.com/), make sure you have checked their [robots.txt](https://www.indeed.com/robots.txt) file. Create a table in the database
###Code
import pandas
import configparser
import psycopg2
###Output
_____no_output_____
###Markdown
Read the database connection info from the config.ini
###Code
config = configparser.ConfigParser()
config.read('config.ini')
host = config['myaws']['host']
db = config['myaws']['db']
user = config['myaws']['user']
pwd = config['myaws']['pwd']
###Output
_____no_output_____
###Markdown
Establish a connection to the database, and create a cursor.
###Code
conn = psycopg2.connect(host = host,
user = user,
password = pwd,
dbname = db
)
cur = conn.cursor()
###Output
_____no_output_____
###Markdown
Design the table in SQL
###Code
# replace the schema and table name to your schema and table name
table_sql = """
CREATE TABLE IF NOT EXISTS gp6.indeed
(
id SERIAL,
job_title VARCHAR(200),
job_company VARCHAR(200),
job_loc VARCHAR(200),
job_salary VARCHAR(200),
job_summary TEXT,
PRIMARY KEY(id)
);
"""
###Output
_____no_output_____
###Markdown
create the table
###Code
cur.execute(table_sql)
conn.commit()
###Output
_____no_output_____
###Markdown
Request HTML [urllib.request](https://docs.python.org/3/library/urllib.request.html) makes simple HTTP requests to visit a web page and get the content via the Python standard library. Here we define the URL to search for intelligence analyst job posts.
###Code
url = 'https://www.indeed.com/jobs?q=intelligence+analyst&start=0'
import urllib.request
response = urllib.request.urlopen(url)
html_data= response.read()
#print(html_data.decode('utf-8'))
###Output
_____no_output_____
###Markdown
Parse HTML We can use the inspector tool in browsers to analyze webpages and use [beautifulsoup](https://www.crummy.com/software/BeautifulSoup/bs4/doc/) to extract webpage data. pip install beautifulsoup4 if needed.
###Code
!pip install beautifulsoup4
from bs4 import BeautifulSoup
soup = BeautifulSoup(html_data,'html.parser')
#print (soup)
###Output
_____no_output_____
###Markdown
Use the `tag.find_all('tag_name', tag_attr='possible_value')` function to return a list of tags where the attribute equals the possible_value. Common attributes include: `id`, `class_`. Common functions include: `tag.text` returns the visible part of the tag; `tag.get('attribute')` returns the value of the attribute of the tag. Since all the job posts are in the div tag with class `jobsearch-SerpJobCard...`, we need to find that div tag from the body tag.
###Code
for table_resultsBody in soup.find_all('table', id = 'resultsBody'):
pass
#print(table_resultsBody)
for table_pageContent in table_resultsBody.find_all('table', id = 'pageContent'):
pass
#print(table_pageContent)
for td_resultsCol in table_pageContent.find_all('td', id = 'resultsCol'):
pass
#print(td_resultsCol)
###Output
_____no_output_____
###Markdown
Save Data to Database Now that we have found the div tag that contains the job posts, we need to identify the job title, company, location, salary, and summary, and save those records to our table in the database.
###Code
# identify the job title, company, ratings, reviews, salary, and summary
for div_row in td_resultsCol.find_all('div', class_='jobsearch-SerpJobCard unifiedRow row result'):
# find job title
job_title = None
job_company = None
job_rating = None
job_loc = None
job_salary = None
job_summary = None
for h2_title in div_row.find_all('h2', class_ = 'title'):
job_title = h2_title.a.text.strip().replace("'","_")
for div_dsc in div_row.find_all('div', class_ = 'sjcl'):
#find company name
for span_company in div_dsc.find_all('span', class_ = 'company'):
job_company = span_company.text.strip().replace("'","_")
# find location
for div_loc in div_dsc.find_all('div', class_ = 'location accessible-contrast-color-location'):
job_loc = div_loc.text.strip().replace("'","_")
# find salary
for div_salary in div_row.find_all('div',class_ ='salarySnippet'):
job_salary = div_salary.text.strip().replace("'","_")
#find summary
for div_summary in div_row.find_all('div', class_ = 'summary'):
job_summary = div_summary.text.strip().replace("'","_")
# insert into database
sql_insert = """
insert into gp6.indeed(job_title,job_company,job_loc,job_salary,job_summary)
values('{}','{}','{}','{}','{}')
""".format(job_title,job_company,job_loc,job_salary,job_summary)
cur.execute(sql_insert)
conn.commit()
###Output
_____no_output_____
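###Markdown
Note that building SQL by string formatting is fragile: the `.replace("'","_")` calls above exist only to survive embedded quotes. A sketch of the same insert using psycopg2's parameter binding (assuming the same `conn`, `cur`, and loop variables):
###Code
sql_insert = """
insert into gp6.indeed(job_title, job_company, job_loc, job_salary, job_summary)
values (%s, %s, %s, %s, %s)
"""
# psycopg2 quotes and escapes the bound values itself, so no manual replace is needed
cur.execute(sql_insert, (job_title, job_company, job_loc, job_salary, job_summary))
conn.commit()
###Output
_____no_output_____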
###Markdown
View the Table
###Code
df = pandas.read_sql_query('select count(*) as count,job_title from gp6.indeed group by job_title order by count desc',conn)
df[:]
###Output
_____no_output_____
###Markdown
Query the Table
###Code
df = pandas.read_sql_query('select count(*) as count,job_title from gp6.indeed group by job_title order by count desc ', conn)
df.plot.bar(x='job_title')
cur.close()
conn.close()
###Output
_____no_output_____
###Markdown
Extract Job Posts from Indeed Before extracting job posts from [Indeed](https://www.indeed.com/), make sure you have checked their [robots.txt](https://www.indeed.com/robots.txt) file. Create a table in the database
###Code
!pip install psycopg2
!pip install pandas
import pandas
import configparser
import psycopg2
###Output
_____no_output_____
###Markdown
Read the database connection info from the config.ini
###Code
config = configparser.ConfigParser()
config.read('config.ini')
host = config['myaws']['host']
db = config['myaws']['db']
user = config['myaws']['user']
pwd = config['myaws']['pwd']
###Output
_____no_output_____
###Markdown
Establish a connection to the database, and create a cursor.
###Code
conn = psycopg2.connect(host = host,
user = user,
password = pwd,
dbname = db
)
cur = conn.cursor()
###Output
_____no_output_____
###Markdown
Design the table in SQL
###Code
# replace the schema and table name to your schema and table name
table_sql = """
CREATE TABLE IF NOT EXISTS gp9.indeed
(
id SERIAL,
job_title VARCHAR(200),
job_company VARCHAR(200),
job_loc VARCHAR(200),
job_salary VARCHAR(200),
job_summary TEXT,
PRIMARY KEY(id)
);
"""
###Output
_____no_output_____
###Markdown
create the table
###Code
cur.execute(table_sql)
conn.commit()
###Output
_____no_output_____
###Markdown
Request HTML [urllib.request](https://docs.python.org/3/library/urllib.request.html) makes simple HTTP requests to visit a web page and get the content via the Python standard library. Here we define the URL to search for intelligence analyst job posts.
###Code
url = 'https://www.indeed.com/jobs?q=intelligence+analyst&start=0'
import urllib.request
response = urllib.request.urlopen(url)
html_data= response.read()
#print(html_data.decode('utf-8'))
###Output
_____no_output_____
###Markdown
Parse HTML We can use the inspector tool in browsers to analyze webpages and use [beautifulsoup](https://www.crummy.com/software/BeautifulSoup/bs4/doc/) to extract webpage data. pip install beautifulsoup4 if needed.
###Code
!pip install beautifulsoup4
from bs4 import BeautifulSoup
soup = BeautifulSoup(html_data,'html.parser')
#print (soup)
###Output
_____no_output_____
###Markdown
Use the `tag.find_all('tag_name', tag_attr='possible_value')` function to return a list of tags where the attribute equals the possible_value. Common attributes include: `id`, `class_`. Common functions include: `tag.text` returns the visible part of the tag; `tag.get('attribute')` returns the value of the attribute of the tag. Since all the job posts are in the div tag with class `jobsearch-SerpJobCard...`, we need to find that div tag from the body tag.
###Code
for table_resultsBody in soup.find_all('table', id = 'resultsBody'):
pass
#print(table_resultsBody)
for table_pageContent in table_resultsBody.find_all('table', id = 'pageContent'):
pass
#print(table_pageContent)
for td_resultsCol in table_pageContent.find_all('td', id = 'resultsCol'):
pass
print(td_resultsCol)
###Output
<td id="resultsCol">
<div id="resultsColTopSpace"></div>
<div class="messageContainer">
<script type="text/javascript">
function setRefineByCookie(refineByTypes) {
var expires = new Date();
expires.setTime(expires.getTime() + (10 * 1000));
for (var i = 0; i < refineByTypes.length; i++) {
setCookie(refineByTypes[i], "1", expires);
}
}
</script>
</div>
<style type="text/css">
#increased_radius_result {
font-size: 16px;
font-style: italic;
}
#original_radius_result{
font-size: 13px;
font-style: italic;
color: #666666;
}
</style>
<div class="resultsTop"><div class="mosaic-zone" id="mosaic-zone-aboveJobCards"><div class="mosaic mosaic-provider-serpreportjob" id="mosaic-provider-serpreportjob"><span><div class="mosaic-reportcontent-content"></div></span></div></div><script type="text/javascript">
try {
window.mosaic.onMosaicApiReady(function() {
var zoneId = 'aboveJobCards';
var providers = window.mosaic.zonedProviders[zoneId];
if (providers) {
providers.filter(function(p) { return window.mosaic.lazyFns[p]; }).forEach(function(p) {
return window.mosaic.api.loadProvider(p);
});
}
});
} catch (e) {};
</script><div data-tn-section="resumePromo" id="resumePromo">
<a aria-hidden="true" href="/promo/resume" onclick="this.href = appendParamsOnce( this.href, '?from=serptop3&subfrom=resprmrtop&trk.origin=jobsearch&trk.variant=resprmrtop&trk.tk=1eks6bfsdp7om800')" tabindex="-1"><span aria-label="post resume icon" class="new-ico" role="img"></span></a> <a class="resume-promo-link" href="/promo/resume" onclick="this.href = appendParamsOnce( this.href, '?from=serptop3&subfrom=resprmrtop&trk.origin=jobsearch&trk.variant=resprmrtop&trk.tk=1eks6bfsdp7om800')"><b>Upload your resume</b></a> - Let employers find you</div><h1 class="currentSearchLabel-a11y-contrast-color" id="jobsInLocation">
intelligence analyst jobs</h1><div class="secondRow">
<div class="serp-filters-sort-by-container">
<span class="serp-filters-sort-by-label">Sort by: </span>
<span class="no-wrap"><b>relevance</b> -
<a href="/jobs?q=intelligence+analyst&sort=date" rel="nofollow">date</a></span>
</div><div class="searchCountContainer">
<div class="searchCount-a11y-contrast-color" id="searchCount">
<div id="searchCountPages">
Page 2 of 24,365 jobs</div>
<div class="serp-relevance-explanation"><button aria-label="help icon" class="serp-relevance-explanation-helpIcon serp-helpIcon" type="button"><svg height="16" width="16" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"><defs><lineargradient id="helpIcon-a" x1="50%" x2="50%" y1="0%" y2="100%"><stop offset="0%" stop-color="#FFF" stop-opacity=".5"></stop><stop offset="100%" stop-opacity=".5"></stop></lineargradient><lineargradient id="helpIcon-b" x1="50%" x2="50%" y1="0%" y2="100%"><stop offset="0%" stop-opacity=".5"></stop><stop offset="100%" stop-opacity=".5"></stop></lineargradient><path d="M7.1537 3.391C8.373 3.4665 9.3466 4.44 9.4223 5.6594 9.4886 6.7088 8.8736 7.6823 7.9 8.0702c-.1413.0563-.2358.1796-.2358.321v.6619h-1.324v-.662c0-.6894.4162-1.2944 1.0687-1.5497.4442-.1795.7283-.6244.6995-1.0968-.0382-.548-.4824-.9922-1.0304-1.0304-.3116-.0282-.605.085-.8315.2934-.2271.2077-.3504.4911-.3504.8034v.662H4.5728v-.662c0-.662.2834-1.3146.7658-1.7682.4911-.463 1.1343-.6995 1.815-.6519zM6.33 10.22c0-.368.2586-.6649.6606-.6683.004 0 .0047-.002.006-.002h.0114v.004c.412.0157.662.3064.662.6656-.0087.3736-.2566.6595-.662.667-.0013.0034-.0033.002-.0053.002-.0034 0-.006.0014-.008.0014-.0027 0-.0027-.0014-.004-.0014-.4-.0142-.6607-.2981-.6607-.6683zM1.6407 7c0-2.9554 2.4046-5.36 5.36-5.36 2.9553 0 5.36 2.4046 5.36 5.36 0 2.9554-2.4047 5.36-5.36 5.36-2.9554 0-5.36-2.4046-5.36-5.36zM.3 7c0 3.6997 3.0003 6.7 6.7 6.7 3.7004 0 6.7-3.0003 6.7-6.7C13.7 3.2996 10.7004.3 7 .3 3.3003.3.3 3.2996.3 7z" id="helpIcon-c"></path></defs><g fill="none" fill-rule="evenodd"><g fill-rule="nonzero"><path d="M8.1537 4.391c1.2194.0756 2.1929 1.0491 2.2686 2.2685.0663 1.0493-.5487 2.0228-1.5223 2.4107-.1413.0563-.2358.1796-.2358.321v.6619h-1.324v-.662c0-.6894.4162-1.2944 1.0687-1.5497.4442-.1795.7283-.6244.6995-1.0968-.0382-.548-.4824-.9922-1.0304-1.0304-.3116-.0282-.605.085-.8315.2934-.2271.2077-.3504.4911-.3504.8034v.662H5.5728v-.662c0-.662.2834-1.3146.7658-1.7682.4911-.463 1.1343-.6995 1.815-.6519zM7.33 11.22c0-.368.2586-.6649.6606-.6683.004 0 .0047-.002.006-.002h.0114v.004c.412.0157.662.3064.662.6656-.0087.3736-.2566.6595-.662.667-.0013.0034-.0033.002-.0053.002-.0034 0-.006.0014-.008.0014-.0027 0-.0027-.0014-.004-.0014-.4-.0142-.6607-.2981-.6607-.6683zM2.6407 8c0-2.9554 2.4046-5.36 5.36-5.36 2.9553 0 5.36 2.4046 5.36 5.36 0 2.9554-2.4047 5.36-5.36 5.36-2.9554 0-5.36-2.4046-5.36-5.36zM1.3 8c0 3.6997 3.0003 6.7 6.7 6.7 3.7004 0 6.7-3.0003 6.7-6.7 0-3.7004-2.9996-6.7-6.7-6.7-3.6997 0-6.7 2.9996-6.7 6.7z" fill="#D8D8D8"></path><path d="M7.1537 3.391C8.373 3.4665 9.3466 4.44 9.4223 5.6594 9.4886 6.7088 8.8736 7.6823 7.9 8.0702c-.1413.0563-.2358.1796-.2358.321v.6619h-1.324v-.662c0-.6894.4162-1.2944 1.0687-1.5497.4442-.1795.7283-.6244.6995-1.0968-.0382-.548-.4824-.9922-1.0304-1.0304-.3116-.0282-.605.085-.8315.2934-.2271.2077-.3504.4911-.3504.8034v.662H4.5728v-.662c0-.662.2834-1.3146.7658-1.7682.4911-.463 1.1343-.6995 1.815-.6519zM6.33 10.22c0-.368.2586-.6649.6606-.6683.004 0 .0047-.002.006-.002h.0114v.004c.412.0157.662.3064.662.6656-.0087.3736-.2566.6595-.662.667-.0013.0034-.0033.002-.0053.002-.0034 0-.006.0014-.008.0014-.0027 0-.0027-.0014-.004-.0014-.4-.0142-.6607-.2981-.6607-.6683zM1.6407 7c0-2.9554 2.4046-5.36 5.36-5.36 2.9553 0 5.36 2.4046 5.36 5.36 0 2.9554-2.4047 5.36-5.36 5.36-2.9554 0-5.36-2.4046-5.36-5.36zM.3 7c0 3.6997 3.0003 6.7 6.7 6.7 3.7004 0 6.7-3.0003 6.7-6.7C13.7 3.2996 10.7004.3 7 .3 3.3003.3.3 3.2996.3 7z" fill="url(#helpIcon-a)" transform="translate(1 1)"></path><path d="M7.1537 
3.391C8.373 3.4665 9.3466 4.44 9.4223 5.6594 9.4886 6.7088 8.8736 7.6823 7.9 8.0702c-.1413.0563-.2358.1796-.2358.321v.6619h-1.324v-.662c0-.6894.4162-1.2944 1.0687-1.5497.4442-.1795.7283-.6244.6995-1.0968-.0382-.548-.4824-.9922-1.0304-1.0304-.3116-.0282-.605.085-.8315.2934-.2271.2077-.3504.4911-.3504.8034v.662H4.5728v-.662c0-.662.2834-1.3146.7658-1.7682.4911-.463 1.1343-.6995 1.815-.6519zM6.33 10.22c0-.368.2586-.6649.6606-.6683.004 0 .0047-.002.006-.002h.0114v.004c.412.0157.662.3064.662.6656-.0087.3736-.2566.6595-.662.667-.0013.0034-.0033.002-.0053.002-.0034 0-.006.0014-.008.0014-.0027 0-.0027-.0014-.004-.0014-.4-.0142-.6607-.2981-.6607-.6683zM1.6407 7c0-2.9554 2.4046-5.36 5.36-5.36 2.9553 0 5.36 2.4046 5.36 5.36 0 2.9554-2.4047 5.36-5.36 5.36-2.9554 0-5.36-2.4046-5.36-5.36zM.3 7c0 3.6997 3.0003 6.7 6.7 6.7 3.7004 0 6.7-3.0003 6.7-6.7C13.7 3.2996 10.7004.3 7 .3 3.3003.3.3 3.2996.3 7z" fill="url(#helpIcon-a)" transform="translate(1 1)"></path><path d="M7.1537 3.391C8.373 3.4665 9.3466 4.44 9.4223 5.6594 9.4886 6.7088 8.8736 7.6823 7.9 8.0702c-.1413.0563-.2358.1796-.2358.321v.6619h-1.324v-.662c0-.6894.4162-1.2944 1.0687-1.5497.4442-.1795.7283-.6244.6995-1.0968-.0382-.548-.4824-.9922-1.0304-1.0304-.3116-.0282-.605.085-.8315.2934-.2271.2077-.3504.4911-.3504.8034v.662H4.5728v-.662c0-.662.2834-1.3146.7658-1.7682.4911-.463 1.1343-.6995 1.815-.6519zM6.33 10.22c0-.368.2586-.6649.6606-.6683.004 0 .0047-.002.006-.002h.0114v.004c.412.0157.662.3064.662.6656-.0087.3736-.2566.6595-.662.667-.0013.0034-.0033.002-.0053.002-.0034 0-.006.0014-.008.0014-.0027 0-.0027-.0014-.004-.0014-.4-.0142-.6607-.2981-.6607-.6683zM1.6407 7c0-2.9554 2.4046-5.36 5.36-5.36 2.9553 0 5.36 2.4046 5.36 5.36 0 2.9554-2.4047 5.36-5.36 5.36-2.9554 0-5.36-2.4046-5.36-5.36zM.3 7c0 3.6997 3.0003 6.7 6.7 6.7 3.7004 0 6.7-3.0003 6.7-6.7C13.7 3.2996 10.7004.3 7 .3 3.3003.3.3 3.2996.3 7z" fill="url(#helpIcon-b)" transform="translate(1 1)"></path></g><g transform="translate(1 1)"><mask fill="#fff" id="helpIcon-d"><use xlink:href="#helpIcon-c"></use></mask><g mask="url(#helpIcon-d)"><path d="M-1-1h16v16H-1z" fill="#6F6F6F" fill-rule="nonzero"></path></g></g></g></svg></button><div class="serp-relevance-explanation-tooltip hidden"><div aria-labelledby="callout-heading-478280227" class="icl-Callout icl-Callout--caretEnd" role="alert"><div class="icl-Callout-header"><h3 class="icl-Callout-heading" id="callout-heading-478280227"></h3><a class="icl-CloseButton icl-Callout-close"><svg aria-label="dismiss-tooltip" class="icl-Icon icl-Icon--sm icl-Icon--black close" role="img"><g><path d="M14.53,4.53L13.47,3.47,9,7.94,4.53,3.47,3.47,4.53,7.94,9,3.47,13.47l1.06,1.06L9,10.06l4.47,4.47,1.06-1.06L10.06,9Z"></path></g></svg></a></div><div class="icl-Callout-content"><div class="jobsearch-ResultsInfo-text">Displayed here are Job Ads that match your query. Indeed may be compensated by these employers, helping keep Indeed free for jobseekers. Indeed ranks Job Ads based on a combination of employer bids and relevance, such as your search terms and other activity on Indeed. For more information, see the <a href="//www.indeed.com/legal?hl=en#tosIntro">Indeed Terms of Service</a></div></div></div></div></div></div>
</div></div>
</div>
<a id="jobPostingsAnchor" tabindex="-1"></a>
<div class="jobsearch-SerpJobCard unifiedRow row result" data-jk="d435de34df074d3a" data-tn-component="organicJob" id="p_d435de34df074d3a">
<h2 class="title">
<a class="jobtitle turnstileLink" data-tn-element="jobTitle" href="/rc/clk?jk=d435de34df074d3a&fccid=988d72635eb868b4&vjs=3" id="jl_d435de34df074d3a" onclick="setRefineByCookie([]); return rclk(this,jobmap[0],true,0);" onmousedown="return rclk(this,jobmap[0],0);" rel="noopener nofollow" target="_blank" title="Intelligence Analyst Intern">
<b>Intelligence</b> <b>Analyst</b> Intern</a>
</h2>
<div class="sjcl">
<div>
<span class="company">
<a class="turnstileLink" data-tn-element="companyName" href="/cmp/Everbridge" onmousedown="this.href = appendParamsOnce(this.href, 'from=SERP&campaignid=serp-linkcompanyname&fromjk=d435de34df074d3a&jcid=988d72635eb868b4')" rel="noopener" target="_blank">
Everbridge</a></span>
<span class="ratingsDisplay">
<a class="ratingNumber" data-tn-variant="cmplinktst2" href="/cmp/Everbridge/reviews" onmousedown="this.href = appendParamsOnce(this.href, '?campaignid=cmplinktst2&from=SERP&jt=Intelligence+Analyst+Intern&fromjk=d435de34df074d3a&jcid=988d72635eb868b4');" rel="noopener" target="_blank" title="Everbridge reviews">
<span class="ratingsContent">
3.4<svg class="starIcon" height="12px" role="img" width="12px">
<g>
<path d="M 12.00,4.34 C 12.00,4.34 7.69,3.97 7.69,3.97 7.69,3.97 6.00,0.00 6.00,0.00 6.00,0.00 4.31,3.98 4.31,3.98 4.31,3.98 0.00,4.34 0.00,4.34 0.00,4.34 3.28,7.18 3.28,7.18 3.28,7.18 2.29,11.40 2.29,11.40 2.29,11.40 6.00,9.16 6.00,9.16 6.00,9.16 9.71,11.40 9.71,11.40 9.71,11.40 8.73,7.18 8.73,7.18 8.73,7.18 12.00,4.34 12.00,4.34 Z" style="fill: #FFB103"></path>
</g>
</svg>
</span>
</a>
</span>
</div>
<div class="recJobLoc" data-rc-loc="United States" id="recJobLoc_d435de34df074d3a" style="display: none"></div>
<span class="location accessible-contrast-color-location">United States</span>
</div>
<div class="summary">
<ul style="list-style-type:circle;margin-top: 0px;margin-bottom: 0px;padding-left:20px;">
<li>Conduct research into existing risk <b>intelligence</b> events that relate to COVID-19 by using tools and performing additional open-source research.</li>
</ul></div>
<div class="jobsearch-SerpJobCard-footer">
<div class="jobsearch-SerpJobCard-footerActions">
<div class="result-link-bar-container">
<div class="result-link-bar"><span class="date">30+ days ago</span><span class="tt_set" id="tt_set_0"><div class="job-reaction"><button aria-expanded="false" aria-haspopup="true" aria-label="save or dislike" class="job-reaction-kebab" data-ol-has-click-handler="" onclick="toggleKebabMenu('d435de34df074d3a', false, event); return false;" tabindex="0"></button><span class="job-reaction-kebab-menu"><button class="job-reaction-kebab-item job-reaction-save" data-ol-has-click-handler="" onclick="changeJobState('d435de34df074d3a', 'save', 'linkbar', false, '');return false;"><svg focusable="false" height="16" viewbox="0 0 24 24" width="16"><g><path d="M16.5,3A6,6,0,0,0,12,5.09,6,6,0,0,0,7.5,3,5.45,5.45,0,0,0,2,8.5C2,12.28,5.4,15.36,10.55,20L12,21.35,13.45,20C18.6,15.36,22,12.28,22,8.5A5.45,5.45,0,0,0,16.5,3ZM12.1,18.55l-0.1.1-0.1-.1C7.14,14.24,4,11.39,4,8.5A3.42,3.42,0,0,1,7.5,5a3.91,3.91,0,0,1,3.57,2.36h1.87A3.88,3.88,0,0,1,16.5,5,3.42,3.42,0,0,1,20,8.5C20,11.39,16.86,14.24,12.1,18.55Z" fill="#2d2d2d"></path></g></svg><span class="job-reaction-kebab-item-text">Save job</span></button><button class="job-reaction-kebab-item job-reaction-dislike" data-ol-has-click-handler="" onclick="dislikeJob(false, false, 'd435de34df074d3a', 'unsave', 'linkbar', false, '');"><span class="job-reaction-dislike-icon"></span><span class="job-reaction-kebab-item-text">Not interested</span></button><button class="job-reaction-kebab-item job-reaction-report" onclick="reportJob('d435de34df074d3a');"><span class="job-reaction-report-icon"></span><span class="job-reaction-kebab-item-text">Report Job</span></button></span></div><span class="result-link-bar-separator">·</span><a class="sl resultLink save-job-link" href="#" id="sj_d435de34df074d3a" onclick="changeJobState('d435de34df074d3a', 'save', 'linkbar', false, ''); return false;" title="Save this job to my.indeed">Save job</a><span class="result-link-bar-separator">·</span><button aria-expanded="false" class="sl resultLink more-link" id="tog_0" onclick="toggleMoreLinks('d435de34df074d3a', '0'); return false;">More...</button></span><div class="edit_note_content" id="editsaved2_d435de34df074d3a" style="display:none;"></div><script>if (!window['result_d435de34df074d3a']) {window['result_d435de34df074d3a'] = {};}window['result_d435de34df074d3a']['showSource'] = false; window['result_d435de34df074d3a']['source'] = "Everbridge"; window['result_d435de34df074d3a']['loggedIn'] = false; window['result_d435de34df074d3a']['showMyJobsLinks'] = false;window['result_d435de34df074d3a']['undoAction'] = "unsave";window['result_d435de34df074d3a']['relativeJobAge'] = "30+ days ago";window['result_d435de34df074d3a']['jobKey'] = "d435de34df074d3a"; window['result_d435de34df074d3a']['myIndeedAvailable'] = true; window['result_d435de34df074d3a']['showMoreActionsLink'] = window['result_d435de34df074d3a']['showMoreActionsLink'] || true; window['result_d435de34df074d3a']['resultNumber'] = 0; window['result_d435de34df074d3a']['jobStateChangedToSaved'] = false; window['result_d435de34df074d3a']['searchState'] = "q=intelligence analyst&start=2"; window['result_d435de34df074d3a']['basicPermaLink'] = "https://www.indeed.com"; window['result_d435de34df074d3a']['saveJobFailed'] = false; window['result_d435de34df074d3a']['removeJobFailed'] = false; window['result_d435de34df074d3a']['requestPending'] = false; window['result_d435de34df074d3a']['notesEnabled'] = true; window['result_d435de34df074d3a']['currentPage'] = "serp"; window['result_d435de34df074d3a']['sponsored'] = 
false;window['result_d435de34df074d3a']['reportJobButtonEnabled'] = false; window['result_d435de34df074d3a']['showMyJobsHired'] = false; window['result_d435de34df074d3a']['showSaveForSponsored'] = false; window['result_d435de34df074d3a']['showJobAge'] = true; window['result_d435de34df074d3a']['showHolisticCard'] = true; window['result_d435de34df074d3a']['showDislike'] = true; window['result_d435de34df074d3a']['showKebab'] = true; window['result_d435de34df074d3a']['showReport'] = true;</script></div></div>
</div>
</div>
<div class="tab-container">
<div class="more-links-container result-tab" id="tt_display_0" style="display:none;"><div class="more_actions" id="more_0"><ul><li><span class="mat">View all <a href="/q-Everbridge-l-United-States-jobs.html">Everbridge jobs in United States</a> - <a href="/l-United-States-jobs.html">United States jobs</a></span></li><li><span class="mat">Salary Search: <a href="/salaries/business-intelligence-analyst-intern-Salaries,-US" onmousedown="this.href = appendParamsOnce(this.href, '?campaignid=serp-more&fromjk=d435de34df074d3a&from=serp-more');">Business Intelligence Analyst Intern salaries in United States</a></span></li><li><span class="mat">Learn more about working at <a href="/cmp/Everbridge/about" onmousedown="this.href = appendParamsOnce(this.href, '?fromjk=d435de34df074d3a&from=serp-more&campaignid=serp-more&jcid=988d72635eb868b4');">Everbridge</a></span></li><li><span class="mat">See popular <a href="/cmp/Everbridge/faq" onmousedown="this.href = appendParamsOnce(this.href, '?from=serp-more&campaignid=serp-more&fromjk=d435de34df074d3a&jcid=988d72635eb868b4');">questions & answers about Everbridge</a></span></li></ul></div><a class="close-link closeLink" href="#" onclick="toggleMoreLinks('d435de34df074d3a'); return false;" title="Close"></a></div><div class="dya-container result-tab"></div>
<div class="tellafriend-container result-tab email_job_content"></div>
<div class="sign-in-container result-tab"></div>
<div class="notes-container result-tab"></div>
</div>
</div>
<div class="jobToJobRec_Hide" id="jobToJobRec_d435de34df074d3a_sj"></div>
<div class="jobsearch-SerpJobCard unifiedRow row result" data-jk="7503f6cddae7cbd3" data-tn-component="organicJob" id="p_7503f6cddae7cbd3">
<h2 class="title">
<a class="jobtitle turnstileLink" data-tn-element="jobTitle" href="/rc/clk?jk=7503f6cddae7cbd3&fccid=64e4cdd7435d8c42&vjs=3" id="jl_7503f6cddae7cbd3" onclick="setRefineByCookie([]); return rclk(this,jobmap[1],true,0);" onmousedown="return rclk(this,jobmap[1],0);" rel="noopener nofollow" target="_blank" title="Intelligence Analyst (Remote)">
<b>Intelligence</b> <b>Analyst</b> (Remote)</a>
</h2>
<div class="sjcl">
<div>
<span class="company">
<a class="turnstileLink" data-tn-element="companyName" href="/cmp/Crowdstrike" onmousedown="this.href = appendParamsOnce(this.href, 'from=SERP&campaignid=serp-linkcompanyname&fromjk=7503f6cddae7cbd3&jcid=bf94d2bbe4f483e0')" rel="noopener" target="_blank">
CrowdStrike</a></span>
<span class="ratingsDisplay">
<a class="ratingNumber" data-tn-variant="cmplinktst2" href="/cmp/Crowdstrike/reviews" onmousedown="this.href = appendParamsOnce(this.href, '?campaignid=cmplinktst2&from=SERP&jt=Intelligence+Analyst+%28Remote%29&fromjk=7503f6cddae7cbd3&jcid=bf94d2bbe4f483e0');" rel="noopener" target="_blank" title="Crowdstrike reviews">
<span class="ratingsContent">
2.8<svg class="starIcon" height="12px" role="img" width="12px">
<g>
<path d="M 12.00,4.34 C 12.00,4.34 7.69,3.97 7.69,3.97 7.69,3.97 6.00,0.00 6.00,0.00 6.00,0.00 4.31,3.98 4.31,3.98 4.31,3.98 0.00,4.34 0.00,4.34 0.00,4.34 3.28,7.18 3.28,7.18 3.28,7.18 2.29,11.40 2.29,11.40 2.29,11.40 6.00,9.16 6.00,9.16 6.00,9.16 9.71,11.40 9.71,11.40 9.71,11.40 8.73,7.18 8.73,7.18 8.73,7.18 12.00,4.34 12.00,4.34 Z" style="fill: #FFB103"></path>
</g>
</svg>
</span>
</a>
</span>
</div>
<div class="recJobLoc" data-rc-loc="United States" id="recJobLoc_7503f6cddae7cbd3" style="display: none"></div>
<span class="location accessible-contrast-color-location">United States</span>
<span class="remote-bullet">•</span>
<span class="remote">Remote</span>
</div>
<div class="summary">
<ul style="list-style-type:circle;margin-top: 0px;margin-bottom: 0px;padding-left:20px;">
<li>Undergraduate degree, military training or relevant experience in cyber <b>intelligence</b>, computer science, general <b>intelligence</b> studies, security studies,…</li>
</ul></div>
<div class="jobsearch-SerpJobCard-footer">
<div class="jobsearch-SerpJobCard-footerActions">
<div class="result-link-bar-container">
<div class="result-link-bar"><span class="date">9 days ago</span><span class="tt_set" id="tt_set_1"><div class="job-reaction"><button aria-expanded="false" aria-haspopup="true" aria-label="save or dislike" class="job-reaction-kebab" data-ol-has-click-handler="" onclick="toggleKebabMenu('7503f6cddae7cbd3', false, event); return false;" tabindex="0"></button><span class="job-reaction-kebab-menu"><button class="job-reaction-kebab-item job-reaction-save" data-ol-has-click-handler="" onclick="changeJobState('7503f6cddae7cbd3', 'save', 'linkbar', false, '');return false;"><svg focusable="false" height="16" viewbox="0 0 24 24" width="16"><g><path d="M16.5,3A6,6,0,0,0,12,5.09,6,6,0,0,0,7.5,3,5.45,5.45,0,0,0,2,8.5C2,12.28,5.4,15.36,10.55,20L12,21.35,13.45,20C18.6,15.36,22,12.28,22,8.5A5.45,5.45,0,0,0,16.5,3ZM12.1,18.55l-0.1.1-0.1-.1C7.14,14.24,4,11.39,4,8.5A3.42,3.42,0,0,1,7.5,5a3.91,3.91,0,0,1,3.57,2.36h1.87A3.88,3.88,0,0,1,16.5,5,3.42,3.42,0,0,1,20,8.5C20,11.39,16.86,14.24,12.1,18.55Z" fill="#2d2d2d"></path></g></svg><span class="job-reaction-kebab-item-text">Save job</span></button><button class="job-reaction-kebab-item job-reaction-dislike" data-ol-has-click-handler="" onclick="dislikeJob(false, false, '7503f6cddae7cbd3', 'unsave', 'linkbar', false, '');"><span class="job-reaction-dislike-icon"></span><span class="job-reaction-kebab-item-text">Not interested</span></button><button class="job-reaction-kebab-item job-reaction-report" onclick="reportJob('7503f6cddae7cbd3');"><span class="job-reaction-report-icon"></span><span class="job-reaction-kebab-item-text">Report Job</span></button></span></div><span class="result-link-bar-separator">·</span><a class="sl resultLink save-job-link" href="#" id="sj_7503f6cddae7cbd3" onclick="changeJobState('7503f6cddae7cbd3', 'save', 'linkbar', false, ''); return false;" title="Save this job to my.indeed">Save job</a><span class="result-link-bar-separator">·</span><button aria-expanded="false" class="sl resultLink more-link" id="tog_1" onclick="toggleMoreLinks('7503f6cddae7cbd3', '1'); return false;">More...</button></span><div class="edit_note_content" id="editsaved2_7503f6cddae7cbd3" style="display:none;"></div><script>if (!window['result_7503f6cddae7cbd3']) {window['result_7503f6cddae7cbd3'] = {};}window['result_7503f6cddae7cbd3']['showSource'] = false; window['result_7503f6cddae7cbd3']['source'] = "CrowdStrike"; window['result_7503f6cddae7cbd3']['loggedIn'] = false; window['result_7503f6cddae7cbd3']['showMyJobsLinks'] = false;window['result_7503f6cddae7cbd3']['undoAction'] = "unsave";window['result_7503f6cddae7cbd3']['relativeJobAge'] = "9 days ago";window['result_7503f6cddae7cbd3']['jobKey'] = "7503f6cddae7cbd3"; window['result_7503f6cddae7cbd3']['myIndeedAvailable'] = true; window['result_7503f6cddae7cbd3']['showMoreActionsLink'] = window['result_7503f6cddae7cbd3']['showMoreActionsLink'] || true; window['result_7503f6cddae7cbd3']['resultNumber'] = 1; window['result_7503f6cddae7cbd3']['jobStateChangedToSaved'] = false; window['result_7503f6cddae7cbd3']['searchState'] = "q=intelligence analyst&start=2"; window['result_7503f6cddae7cbd3']['basicPermaLink'] = "https://www.indeed.com"; window['result_7503f6cddae7cbd3']['saveJobFailed'] = false; window['result_7503f6cddae7cbd3']['removeJobFailed'] = false; window['result_7503f6cddae7cbd3']['requestPending'] = false; window['result_7503f6cddae7cbd3']['notesEnabled'] = true; window['result_7503f6cddae7cbd3']['currentPage'] = "serp"; window['result_7503f6cddae7cbd3']['sponsored'] = 
false;window['result_7503f6cddae7cbd3']['reportJobButtonEnabled'] = false; window['result_7503f6cddae7cbd3']['showMyJobsHired'] = false; window['result_7503f6cddae7cbd3']['showSaveForSponsored'] = false; window['result_7503f6cddae7cbd3']['showJobAge'] = true; window['result_7503f6cddae7cbd3']['showHolisticCard'] = true; window['result_7503f6cddae7cbd3']['showDislike'] = true; window['result_7503f6cddae7cbd3']['showKebab'] = true; window['result_7503f6cddae7cbd3']['showReport'] = true;</script></div></div>
</div>
</div>
<div class="tab-container">
<div class="more-links-container result-tab" id="tt_display_1" style="display:none;"><div class="more_actions" id="more_1"><ul><li><span class="mat">View all <a href="/q-Crowdstrike-l-United-States-jobs.html">CrowdStrike jobs in United States</a> - <a href="/l-United-States-jobs.html">United States jobs</a></span></li><li><span class="mat">Salary Search: <a href="/salaries/intelligence-analyst-Salaries,-US" onmousedown="this.href = appendParamsOnce(this.href, '?campaignid=serp-more&fromjk=7503f6cddae7cbd3&from=serp-more');">Intelligence Analyst salaries in United States</a></span></li><li><span class="mat">Learn more about working at <a href="/cmp/Crowdstrike/about" onmousedown="this.href = appendParamsOnce(this.href, '?fromjk=7503f6cddae7cbd3&from=serp-more&campaignid=serp-more&jcid=bf94d2bbe4f483e0');">CrowdStrike</a></span></li><li><span class="mat">See popular <a href="/cmp/Crowdstrike/faq" onmousedown="this.href = appendParamsOnce(this.href, '?from=serp-more&campaignid=serp-more&fromjk=7503f6cddae7cbd3&jcid=bf94d2bbe4f483e0');">questions & answers about CrowdStrike</a></span></li></ul></div><a class="close-link closeLink" href="#" onclick="toggleMoreLinks('7503f6cddae7cbd3'); return false;" title="Close"></a></div><div class="dya-container result-tab"></div>
<div class="tellafriend-container result-tab email_job_content"></div>
<div class="sign-in-container result-tab"></div>
<div class="notes-container result-tab"></div>
</div>
</div>
<div class="jobToJobRec_Hide" id="jobToJobRec_7503f6cddae7cbd3_sj"></div>
<div class="jobsearch-SerpJobCard unifiedRow row result" data-jk="ec5576e08ccffe63" data-tn-component="organicJob" id="p_ec5576e08ccffe63">
<h2 class="title">
<a class="jobtitle turnstileLink" data-tn-element="jobTitle" href="/rc/clk?jk=ec5576e08ccffe63&fccid=e86212ad9b1d3808&vjs=3" id="jl_ec5576e08ccffe63" onclick="setRefineByCookie([]); return rclk(this,jobmap[2],true,1);" onmousedown="return rclk(this,jobmap[2],1);" rel="noopener nofollow" target="_blank" title="Intelligence Operations Specialist">
<b>Intelligence</b> Operations Specialist</a>
<span class="new">new</span></h2>
<div class="sjcl">
<div>
<span class="company">
<a class="turnstileLink" data-tn-element="companyName" href="/cmp/Transportation-Security-Administration" onmousedown="this.href = appendParamsOnce(this.href, 'from=SERP&campaignid=serp-linkcompanyname&fromjk=ec5576e08ccffe63&jcid=e86212ad9b1d3808')" rel="noopener" target="_blank">
Transportation Security Administration</a></span>
<span class="ratingsDisplay">
<a class="ratingNumber" data-tn-variant="cmplinktst2" href="/cmp/Transportation-Security-Administration/reviews" onmousedown="this.href = appendParamsOnce(this.href, '?campaignid=cmplinktst2&from=SERP&jt=Intelligence+Operations+Specialist&fromjk=ec5576e08ccffe63&jcid=e86212ad9b1d3808');" rel="noopener" target="_blank" title="Transportation Security Administration reviews">
<span class="ratingsContent">
3.3<svg class="starIcon" height="12px" role="img" width="12px">
<g>
<path d="M 12.00,4.34 C 12.00,4.34 7.69,3.97 7.69,3.97 7.69,3.97 6.00,0.00 6.00,0.00 6.00,0.00 4.31,3.98 4.31,3.98 4.31,3.98 0.00,4.34 0.00,4.34 0.00,4.34 3.28,7.18 3.28,7.18 3.28,7.18 2.29,11.40 2.29,11.40 2.29,11.40 6.00,9.16 6.00,9.16 6.00,9.16 9.71,11.40 9.71,11.40 9.71,11.40 8.73,7.18 8.73,7.18 8.73,7.18 12.00,4.34 12.00,4.34 Z" style="fill: #FFB103"></path>
</g>
</svg>
</span>
</a>
</span>
</div>
<div class="recJobLoc" data-rc-loc="Colorado Springs, CO" id="recJobLoc_ec5576e08ccffe63" style="display: none"></div>
<span class="location accessible-contrast-color-location">Colorado Springs, CO</span>
</div>
<div class="salarySnippet holisticSalary">
<span class="salary no-wrap">
<span class="salaryText">
$52,700 - $99,586 a year</span>
</span>
</div>
<div class="summary">
<ul style="list-style-type:circle;margin-top: 0px;margin-bottom: 0px;padding-left:20px;">
<li>Advanced technical knowledge of <b>intelligence</b> collection, analysis, evaluation, interpretation and operations to plan and accomplish <b>intelligence</b> assignments and…</li>
</ul></div>
<div class="jobsearch-SerpJobCard-footer">
<div class="jobsearch-SerpJobCard-footerActions">
<div class="result-link-bar-container">
<div class="result-link-bar"><span class="date">4 days ago</span><span class="tt_set" id="tt_set_2"><div class="job-reaction"><button aria-expanded="false" aria-haspopup="true" aria-label="save or dislike" class="job-reaction-kebab" data-ol-has-click-handler="" onclick="toggleKebabMenu('ec5576e08ccffe63', false, event); return false;" tabindex="0"></button><span class="job-reaction-kebab-menu"><button class="job-reaction-kebab-item job-reaction-save" data-ol-has-click-handler="" onclick="changeJobState('ec5576e08ccffe63', 'save', 'linkbar', false, '');return false;"><svg focusable="false" height="16" viewbox="0 0 24 24" width="16"><g><path d="M16.5,3A6,6,0,0,0,12,5.09,6,6,0,0,0,7.5,3,5.45,5.45,0,0,0,2,8.5C2,12.28,5.4,15.36,10.55,20L12,21.35,13.45,20C18.6,15.36,22,12.28,22,8.5A5.45,5.45,0,0,0,16.5,3ZM12.1,18.55l-0.1.1-0.1-.1C7.14,14.24,4,11.39,4,8.5A3.42,3.42,0,0,1,7.5,5a3.91,3.91,0,0,1,3.57,2.36h1.87A3.88,3.88,0,0,1,16.5,5,3.42,3.42,0,0,1,20,8.5C20,11.39,16.86,14.24,12.1,18.55Z" fill="#2d2d2d"></path></g></svg><span class="job-reaction-kebab-item-text">Save job</span></button><button class="job-reaction-kebab-item job-reaction-dislike" data-ol-has-click-handler="" onclick="dislikeJob(false, false, 'ec5576e08ccffe63', 'unsave', 'linkbar', false, '');"><span class="job-reaction-dislike-icon"></span><span class="job-reaction-kebab-item-text">Not interested</span></button><button class="job-reaction-kebab-item job-reaction-report" onclick="reportJob('ec5576e08ccffe63');"><span class="job-reaction-report-icon"></span><span class="job-reaction-kebab-item-text">Report Job</span></button></span></div><span class="result-link-bar-separator">·</span><a class="sl resultLink save-job-link" href="#" id="sj_ec5576e08ccffe63" onclick="changeJobState('ec5576e08ccffe63', 'save', 'linkbar', false, ''); return false;" title="Save this job to my.indeed">Save job</a><span class="result-link-bar-separator">·</span><button aria-expanded="false" class="sl resultLink more-link" id="tog_2" onclick="toggleMoreLinks('ec5576e08ccffe63', '2'); return false;">More...</button></span><div class="edit_note_content" id="editsaved2_ec5576e08ccffe63" style="display:none;"></div><script>if (!window['result_ec5576e08ccffe63']) {window['result_ec5576e08ccffe63'] = {};}window['result_ec5576e08ccffe63']['showSource'] = false; window['result_ec5576e08ccffe63']['source'] = "Transportation Security Administration"; window['result_ec5576e08ccffe63']['loggedIn'] = false; window['result_ec5576e08ccffe63']['showMyJobsLinks'] = false;window['result_ec5576e08ccffe63']['undoAction'] = "unsave";window['result_ec5576e08ccffe63']['relativeJobAge'] = "4 days ago";window['result_ec5576e08ccffe63']['jobKey'] = "ec5576e08ccffe63"; window['result_ec5576e08ccffe63']['myIndeedAvailable'] = true; window['result_ec5576e08ccffe63']['showMoreActionsLink'] = window['result_ec5576e08ccffe63']['showMoreActionsLink'] || true; window['result_ec5576e08ccffe63']['resultNumber'] = 2; window['result_ec5576e08ccffe63']['jobStateChangedToSaved'] = false; window['result_ec5576e08ccffe63']['searchState'] = "q=intelligence analyst&start=2"; window['result_ec5576e08ccffe63']['basicPermaLink'] = "https://www.indeed.com"; window['result_ec5576e08ccffe63']['saveJobFailed'] = false; window['result_ec5576e08ccffe63']['removeJobFailed'] = false; window['result_ec5576e08ccffe63']['requestPending'] = false; window['result_ec5576e08ccffe63']['notesEnabled'] = true; window['result_ec5576e08ccffe63']['currentPage'] = "serp"; window['result_ec5576e08ccffe63']['sponsored'] = 
false;window['result_ec5576e08ccffe63']['reportJobButtonEnabled'] = false; window['result_ec5576e08ccffe63']['showMyJobsHired'] = false; window['result_ec5576e08ccffe63']['showSaveForSponsored'] = false; window['result_ec5576e08ccffe63']['showJobAge'] = true; window['result_ec5576e08ccffe63']['showHolisticCard'] = true; window['result_ec5576e08ccffe63']['showDislike'] = true; window['result_ec5576e08ccffe63']['showKebab'] = true; window['result_ec5576e08ccffe63']['showReport'] = true;</script></div></div>
</div>
</div>
<div class="tab-container">
<div class="more-links-container result-tab" id="tt_display_2" style="display:none;"><div class="more_actions" id="more_2"><ul><li><span class="mat">View all <a href="/q-Transportation-Security-Administration-l-Colorado-Springs,-CO-jobs.html">Transportation Security Administration jobs in Colorado Springs, CO</a> - <a href="/l-Colorado-Springs,-CO-jobs.html">Colorado Springs jobs</a></span></li><li><span class="mat">Salary Search: <a href="/salaries/intelligence-specialist-Salaries,-Colorado-Springs-CO" onmousedown="this.href = appendParamsOnce(this.href, '?campaignid=serp-more&fromjk=ec5576e08ccffe63&from=serp-more');">Intelligence Specialist salaries in Colorado Springs, CO</a></span></li><li><span class="mat">Learn more about working at <a href="/cmp/Transportation-Security-Administration/about" onmousedown="this.href = appendParamsOnce(this.href, '?fromjk=ec5576e08ccffe63&from=serp-more&campaignid=serp-more&jcid=e86212ad9b1d3808');">Transportation Security Administration</a></span></li><li><span class="mat">See popular <a href="/cmp/Transportation-Security-Administration/faq" onmousedown="this.href = appendParamsOnce(this.href, '?from=serp-more&campaignid=serp-more&fromjk=ec5576e08ccffe63&jcid=e86212ad9b1d3808');">questions & answers about Transportation Security Administration</a></span></li></ul></div><a class="close-link closeLink" href="#" onclick="toggleMoreLinks('ec5576e08ccffe63'); return false;" title="Close"></a></div><div class="dya-container result-tab"></div>
<div class="tellafriend-container result-tab email_job_content"></div>
<div class="sign-in-container result-tab"></div>
<div class="notes-container result-tab"></div>
</div>
</div>
<div class="jobToJobRec_Hide" id="jobToJobRec_ec5576e08ccffe63_sj"></div>
<div class="jobsearch-SerpJobCard unifiedRow row result" data-jk="21c104783241fc55" data-tn-component="organicJob" id="p_21c104783241fc55">
<h2 class="title">
<a class="jobtitle turnstileLink" data-tn-element="jobTitle" href="/rc/clk?jk=21c104783241fc55&fccid=e9870e3159e9c6ac&vjs=3" id="jl_21c104783241fc55" onclick="setRefineByCookie([]); return rclk(this,jobmap[3],true,1);" onmousedown="return rclk(this,jobmap[3],1);" rel="noopener nofollow" target="_blank" title="Economic Analyst">
Economic <b>Analyst</b></a>
</h2>
<div class="sjcl">
<div>
<span class="company">
<a class="turnstileLink" data-tn-element="companyName" href="/cmp/Central-Intelligence-Agency" onmousedown="this.href = appendParamsOnce(this.href, 'from=SERP&campaignid=serp-linkcompanyname&fromjk=21c104783241fc55&jcid=e9870e3159e9c6ac')" rel="noopener" target="_blank">
Central <b>Intelligence</b> Agency</a></span>
<span class="ratingsDisplay">
<a class="ratingNumber" data-tn-variant="cmplinktst2" href="/cmp/Central-Intelligence-Agency/reviews" onmousedown="this.href = appendParamsOnce(this.href, '?campaignid=cmplinktst2&from=SERP&jt=Economic+Analyst&fromjk=21c104783241fc55&jcid=e9870e3159e9c6ac');" rel="noopener" target="_blank" title="Central Intelligence Agency reviews">
<span class="ratingsContent">
4.3<svg class="starIcon" height="12px" role="img" width="12px">
<g>
<path d="M 12.00,4.34 C 12.00,4.34 7.69,3.97 7.69,3.97 7.69,3.97 6.00,0.00 6.00,0.00 6.00,0.00 4.31,3.98 4.31,3.98 4.31,3.98 0.00,4.34 0.00,4.34 0.00,4.34 3.28,7.18 3.28,7.18 3.28,7.18 2.29,11.40 2.29,11.40 2.29,11.40 6.00,9.16 6.00,9.16 6.00,9.16 9.71,11.40 9.71,11.40 9.71,11.40 8.73,7.18 8.73,7.18 8.73,7.18 12.00,4.34 12.00,4.34 Z" style="fill: #FFB103"></path>
</g>
</svg>
</span>
</a>
</span>
</div>
<div class="recJobLoc" data-rc-loc="Washington, DC" id="recJobLoc_21c104783241fc55" style="display: none"></div>
<span class="location accessible-contrast-color-location">Washington, DC</span>
</div>
<div class="salarySnippet holisticSalary">
<span class="salary no-wrap">
<span class="salaryText">
$57,495 - $157,709 a year</span>
</span>
</div>
<div class="summary">
<ul style="list-style-type:circle;margin-top: 0px;margin-bottom: 0px;padding-left:20px;">
<li style="margin-bottom:0px;">Bachelor's or Master's degree in one of the following fields or related studies:</li>
<li>The DA helps provide timely, accurate and objective all-source intelligence…</li>
</ul></div>
<div class="jobsearch-SerpJobCard-footer">
<div class="jobsearch-SerpJobCard-footerActions">
<div class="result-link-bar-container">
<div class="result-link-bar"><span class="date">30+ days ago</span><span class="tt_set" id="tt_set_3"><div class="job-reaction"><button aria-expanded="false" aria-haspopup="true" aria-label="save or dislike" class="job-reaction-kebab" data-ol-has-click-handler="" onclick="toggleKebabMenu('21c104783241fc55', false, event); return false;" tabindex="0"></button><span class="job-reaction-kebab-menu"><button class="job-reaction-kebab-item job-reaction-save" data-ol-has-click-handler="" onclick="changeJobState('21c104783241fc55', 'save', 'linkbar', false, '');return false;"><svg focusable="false" height="16" viewbox="0 0 24 24" width="16"><g><path d="M16.5,3A6,6,0,0,0,12,5.09,6,6,0,0,0,7.5,3,5.45,5.45,0,0,0,2,8.5C2,12.28,5.4,15.36,10.55,20L12,21.35,13.45,20C18.6,15.36,22,12.28,22,8.5A5.45,5.45,0,0,0,16.5,3ZM12.1,18.55l-0.1.1-0.1-.1C7.14,14.24,4,11.39,4,8.5A3.42,3.42,0,0,1,7.5,5a3.91,3.91,0,0,1,3.57,2.36h1.87A3.88,3.88,0,0,1,16.5,5,3.42,3.42,0,0,1,20,8.5C20,11.39,16.86,14.24,12.1,18.55Z" fill="#2d2d2d"></path></g></svg><span class="job-reaction-kebab-item-text">Save job</span></button><button class="job-reaction-kebab-item job-reaction-dislike" data-ol-has-click-handler="" onclick="dislikeJob(false, false, '21c104783241fc55', 'unsave', 'linkbar', false, '');"><span class="job-reaction-dislike-icon"></span><span class="job-reaction-kebab-item-text">Not interested</span></button><button class="job-reaction-kebab-item job-reaction-report" onclick="reportJob('21c104783241fc55');"><span class="job-reaction-report-icon"></span><span class="job-reaction-kebab-item-text">Report Job</span></button></span></div><span class="result-link-bar-separator">·</span><a class="sl resultLink save-job-link" href="#" id="sj_21c104783241fc55" onclick="changeJobState('21c104783241fc55', 'save', 'linkbar', false, ''); return false;" title="Save this job to my.indeed">Save job</a><span class="result-link-bar-separator">·</span><button aria-expanded="false" class="sl resultLink more-link" id="tog_3" onclick="toggleMoreLinks('21c104783241fc55', '3'); return false;">More...</button></span><div class="edit_note_content" id="editsaved2_21c104783241fc55" style="display:none;"></div><script>if (!window['result_21c104783241fc55']) {window['result_21c104783241fc55'] = {};}window['result_21c104783241fc55']['showSource'] = false; window['result_21c104783241fc55']['source'] = "Central Intelligence Agency"; window['result_21c104783241fc55']['loggedIn'] = false; window['result_21c104783241fc55']['showMyJobsLinks'] = false;window['result_21c104783241fc55']['undoAction'] = "unsave";window['result_21c104783241fc55']['relativeJobAge'] = "30+ days ago";window['result_21c104783241fc55']['jobKey'] = "21c104783241fc55"; window['result_21c104783241fc55']['myIndeedAvailable'] = true; window['result_21c104783241fc55']['showMoreActionsLink'] = window['result_21c104783241fc55']['showMoreActionsLink'] || true; window['result_21c104783241fc55']['resultNumber'] = 3; window['result_21c104783241fc55']['jobStateChangedToSaved'] = false; window['result_21c104783241fc55']['searchState'] = "q=intelligence analyst&start=2"; window['result_21c104783241fc55']['basicPermaLink'] = "https://www.indeed.com"; window['result_21c104783241fc55']['saveJobFailed'] = false; window['result_21c104783241fc55']['removeJobFailed'] = false; window['result_21c104783241fc55']['requestPending'] = false; window['result_21c104783241fc55']['notesEnabled'] = true; window['result_21c104783241fc55']['currentPage'] = "serp"; window['result_21c104783241fc55']['sponsored'] = 
false;window['result_21c104783241fc55']['reportJobButtonEnabled'] = false; window['result_21c104783241fc55']['showMyJobsHired'] = false; window['result_21c104783241fc55']['showSaveForSponsored'] = false; window['result_21c104783241fc55']['showJobAge'] = true; window['result_21c104783241fc55']['showHolisticCard'] = true; window['result_21c104783241fc55']['showDislike'] = true; window['result_21c104783241fc55']['showKebab'] = true; window['result_21c104783241fc55']['showReport'] = true;</script></div></div>
</div>
</div>
<div class="tab-container">
<div class="more-links-container result-tab" id="tt_display_3" style="display:none;"><div class="more_actions" id="more_3"><ul><li><span class="mat">View all <a href="/q-Central-Intelligence-Agency-l-Washington,-DC-jobs.html">Central Intelligence Agency jobs in Washington, DC</a> - <a href="/l-Washington,-DC-jobs.html">Washington jobs</a></span></li><li><span class="mat">Salary Search: <a href="/salaries/economic-analyst-Salaries,-Washington-DC" onmousedown="this.href = appendParamsOnce(this.href, '?campaignid=serp-more&fromjk=21c104783241fc55&from=serp-more');">Economic Analyst salaries in Washington, DC</a></span></li><li><span class="mat">Learn more about working at <a href="/cmp/Central-Intelligence-Agency" onmousedown="this.href = appendParamsOnce(this.href, '?fromjk=21c104783241fc55&from=serp-more&campaignid=serp-more&jcid=e9870e3159e9c6ac');">Central Intelligence Agency</a></span></li><li><span class="mat">See popular <a href="/cmp/Central-Intelligence-Agency/faq" onmousedown="this.href = appendParamsOnce(this.href, '?from=serp-more&campaignid=serp-more&fromjk=21c104783241fc55&jcid=e9870e3159e9c6ac');">questions & answers about Central Intelligence Agency</a></span></li></ul></div><a class="close-link closeLink" href="#" onclick="toggleMoreLinks('21c104783241fc55'); return false;" title="Close"></a></div><div class="dya-container result-tab"></div>
<div class="tellafriend-container result-tab email_job_content"></div>
<div class="sign-in-container result-tab"></div>
<div class="notes-container result-tab"></div>
</div>
</div>
<div class="jobToJobRec_Hide" id="jobToJobRec_21c104783241fc55_sj"></div>
<div class="jobsearch-SerpJobCard unifiedRow row result" data-jk="c9c6ded3f16248bb" data-tn-component="organicJob" id="p_c9c6ded3f16248bb">
<h2 class="title">
<a class="jobtitle turnstileLink" data-tn-element="jobTitle" href="/rc/clk?jk=c9c6ded3f16248bb&fccid=78f9872912308a55&vjs=3" id="jl_c9c6ded3f16248bb" onclick="setRefineByCookie([]); return rclk(this,jobmap[4],true,1);" onmousedown="return rclk(this,jobmap[4],1);" rel="noopener nofollow" target="_blank" title="Senior Data analyst (REMOTE)">
Senior Data <b>analyst</b> (REMOTE)</a>
<span class="new">new</span></h2>
<div class="sjcl">
<div>
<span class="company">
<a class="turnstileLink" data-tn-element="companyName" href="/cmp/Elevate-Textiles" onmousedown="this.href = appendParamsOnce(this.href, 'from=SERP&campaignid=serp-linkcompanyname&fromjk=c9c6ded3f16248bb&jcid=6d39c7bbfa2a8d5c')" rel="noopener" target="_blank">
Elevate Textiles</a></span>
<span class="ratingsDisplay">
<a class="ratingNumber" data-tn-variant="cmplinktst2" href="/cmp/Elevate-Textiles/reviews" onmousedown="this.href = appendParamsOnce(this.href, '?campaignid=cmplinktst2&from=SERP&jt=Senior+Data+analyst+%28REMOTE%29&fromjk=c9c6ded3f16248bb&jcid=6d39c7bbfa2a8d5c');" rel="noopener" target="_blank" title="Elevate Textiles reviews">
<span class="ratingsContent">
3.0<svg class="starIcon" height="12px" role="img" width="12px">
<g>
<path d="M 12.00,4.34 C 12.00,4.34 7.69,3.97 7.69,3.97 7.69,3.97 6.00,0.00 6.00,0.00 6.00,0.00 4.31,3.98 4.31,3.98 4.31,3.98 0.00,4.34 0.00,4.34 0.00,4.34 3.28,7.18 3.28,7.18 3.28,7.18 2.29,11.40 2.29,11.40 2.29,11.40 6.00,9.16 6.00,9.16 6.00,9.16 9.71,11.40 9.71,11.40 9.71,11.40 8.73,7.18 8.73,7.18 8.73,7.18 12.00,4.34 12.00,4.34 Z" style="fill: #FFB103"></path>
</g>
</svg>
</span>
</a>
</span>
</div>
<div class="recJobLoc" data-rc-loc="United States" id="recJobLoc_c9c6ded3f16248bb" style="display: none"></div>
<span class="location accessible-contrast-color-location">United States</span>
<span>
<a class="more_loc" href="/addlLoc/redirect?tk=1eks6bfsdp7om800&jk=c9c6ded3f16248bb&dest=%2Fjobs%3Fq%3Dintelligence%2Banalyst%26rbt%3DSenior%2BData%2Banalyst%2B%2528REMOTE%2529%26rbc%3DElevate%2BTextiles%26jtid%3D8104b92ab7667a9a%26jcid%3D6d39c7bbfa2a8d5c%26grp%3Dtcl" onmousedown="ptk('addlloc');" rel="nofollow">
+3 locations</a>
</span>
<span class="remote-bullet">•</span>
<span class="remote">Remote</span>
</div>
<div class="salarySnippet holisticSalary">
<span class="salary no-wrap">
<span class="salaryText">
$29 - $38 an hour</span>
</span>
</div>
<table class="jobCardShelfContainer" role="presentation"><tr class="jobCardShelf"><td class="jobCardShelfItem indeedApply"><span class="jobCardShelfIcon"><svg fill="none" height="16" viewbox="0 0 20 20" width="16"><rect fill="#FF5A1F" height="20" rx="10" width="20"></rect><path clip-rule="evenodd" d="M15.3125 4.0625L10.8125 15.3125L7.99999 11.375L15.3125 4.0625ZM7.604 12.7576L6.875 15.3125L8.567 14.1054L7.604 12.7576ZM7.20463 10.5796L12.419 5.36525L4.0625 9.125L6.9875 10.7968L7.20463 10.5796Z" fill="white" fill-rule="evenodd"></path></svg></span><span class="iaLabel iaIconActive">Easily apply</span></td></tr></table><div class="summary">
<ul style="list-style-type:circle;margin-top: 0px;margin-bottom: 0px;padding-left:20px;">
<li style="margin-bottom:0px;">Knowledge of statistical tools and business reporting.</li>
<li>You will analyze data to understand business and market trends in order to increase company revenue and…</li>
</ul></div>
<div class="jobsearch-SerpJobCard-footer">
<div class="jobsearch-SerpJobCard-footerActions">
<div class="result-link-bar-container">
<div class="result-link-bar"><span class="date">Today</span><span class="tt_set" id="tt_set_4"><div class="job-reaction"><button aria-expanded="false" aria-haspopup="true" aria-label="save or dislike" class="job-reaction-kebab" data-ol-has-click-handler="" onclick="toggleKebabMenu('c9c6ded3f16248bb', false, event); return false;" tabindex="0"></button><span class="job-reaction-kebab-menu"><button class="job-reaction-kebab-item job-reaction-save" data-ol-has-click-handler="" onclick="changeJobState('c9c6ded3f16248bb', 'save', 'linkbar', false, '');return false;"><svg focusable="false" height="16" viewbox="0 0 24 24" width="16"><g><path d="M16.5,3A6,6,0,0,0,12,5.09,6,6,0,0,0,7.5,3,5.45,5.45,0,0,0,2,8.5C2,12.28,5.4,15.36,10.55,20L12,21.35,13.45,20C18.6,15.36,22,12.28,22,8.5A5.45,5.45,0,0,0,16.5,3ZM12.1,18.55l-0.1.1-0.1-.1C7.14,14.24,4,11.39,4,8.5A3.42,3.42,0,0,1,7.5,5a3.91,3.91,0,0,1,3.57,2.36h1.87A3.88,3.88,0,0,1,16.5,5,3.42,3.42,0,0,1,20,8.5C20,11.39,16.86,14.24,12.1,18.55Z" fill="#2d2d2d"></path></g></svg><span class="job-reaction-kebab-item-text">Save job</span></button><button class="job-reaction-kebab-item job-reaction-dislike" data-ol-has-click-handler="" onclick="dislikeJob(false, false, 'c9c6ded3f16248bb', 'unsave', 'linkbar', false, '');"><span class="job-reaction-dislike-icon"></span><span class="job-reaction-kebab-item-text">Not interested</span></button><button class="job-reaction-kebab-item job-reaction-report" onclick="reportJob('c9c6ded3f16248bb');"><span class="job-reaction-report-icon"></span><span class="job-reaction-kebab-item-text">Report Job</span></button></span></div><span class="result-link-bar-separator">·</span><a class="sl resultLink save-job-link" href="#" id="sj_c9c6ded3f16248bb" onclick="changeJobState('c9c6ded3f16248bb', 'save', 'linkbar', false, ''); return false;" title="Save this job to my.indeed">Save job</a><span class="result-link-bar-separator">·</span><button aria-expanded="false" class="sl resultLink more-link" id="tog_4" onclick="toggleMoreLinks('c9c6ded3f16248bb', '4'); return false;">More...</button></span><div class="edit_note_content" id="editsaved2_c9c6ded3f16248bb" style="display:none;"></div><script>if (!window['result_c9c6ded3f16248bb']) {window['result_c9c6ded3f16248bb'] = {};}window['result_c9c6ded3f16248bb']['showSource'] = false; window['result_c9c6ded3f16248bb']['source'] = "Simply Hired"; window['result_c9c6ded3f16248bb']['loggedIn'] = false; window['result_c9c6ded3f16248bb']['showMyJobsLinks'] = false;window['result_c9c6ded3f16248bb']['undoAction'] = "unsave";window['result_c9c6ded3f16248bb']['relativeJobAge'] = "Today";window['result_c9c6ded3f16248bb']['jobKey'] = "c9c6ded3f16248bb"; window['result_c9c6ded3f16248bb']['myIndeedAvailable'] = true; window['result_c9c6ded3f16248bb']['showMoreActionsLink'] = window['result_c9c6ded3f16248bb']['showMoreActionsLink'] || true; window['result_c9c6ded3f16248bb']['resultNumber'] = 4; window['result_c9c6ded3f16248bb']['jobStateChangedToSaved'] = false; window['result_c9c6ded3f16248bb']['searchState'] = "q=intelligence analyst&start=2"; window['result_c9c6ded3f16248bb']['basicPermaLink'] = "https://www.indeed.com"; window['result_c9c6ded3f16248bb']['saveJobFailed'] = false; window['result_c9c6ded3f16248bb']['removeJobFailed'] = false; window['result_c9c6ded3f16248bb']['requestPending'] = false; window['result_c9c6ded3f16248bb']['notesEnabled'] = true; window['result_c9c6ded3f16248bb']['currentPage'] = "serp"; window['result_c9c6ded3f16248bb']['sponsored'] = 
false;window['result_c9c6ded3f16248bb']['reportJobButtonEnabled'] = false; window['result_c9c6ded3f16248bb']['showMyJobsHired'] = false; window['result_c9c6ded3f16248bb']['showSaveForSponsored'] = false; window['result_c9c6ded3f16248bb']['showJobAge'] = true; window['result_c9c6ded3f16248bb']['showHolisticCard'] = true; window['result_c9c6ded3f16248bb']['showDislike'] = true; window['result_c9c6ded3f16248bb']['showKebab'] = true; window['result_c9c6ded3f16248bb']['showReport'] = true;</script></div></div>
</div>
</div>
<div class="tab-container">
<div class="more-links-container result-tab" id="tt_display_4" style="display:none;"><div class="more_actions" id="more_4"><ul><li><span class="mat">View all <a href="/q-Elevate-Textiles-l-United-States-jobs.html">Elevate Textiles jobs in United States</a> - <a href="/l-United-States-jobs.html">United States jobs</a></span></li><li><span class="mat">Salary Search: <a href="/salaries/senior-data-analyst-Salaries,-US" onmousedown="this.href = appendParamsOnce(this.href, '?campaignid=serp-more&fromjk=c9c6ded3f16248bb&from=serp-more');">Senior Data Analyst salaries in United States</a></span></li><li><span class="mat">Learn more about working at <a href="/cmp/Elevate-Textiles" onmousedown="this.href = appendParamsOnce(this.href, '?fromjk=c9c6ded3f16248bb&from=serp-more&campaignid=serp-more&jcid=6d39c7bbfa2a8d5c');">Elevate Textiles</a></span></li><li><span class="mat">See popular <a href="/cmp/Elevate-Textiles/faq" onmousedown="this.href = appendParamsOnce(this.href, '?from=serp-more&campaignid=serp-more&fromjk=c9c6ded3f16248bb&jcid=6d39c7bbfa2a8d5c');">questions & answers about Elevate Textiles</a></span></li></ul></div><a class="close-link closeLink" href="#" onclick="toggleMoreLinks('c9c6ded3f16248bb'); return false;" title="Close"></a></div><div class="dya-container result-tab"></div>
<div class="tellafriend-container result-tab email_job_content"></div>
<div class="sign-in-container result-tab"></div>
<div class="notes-container result-tab"></div>
</div>
</div>
<div class="jobToJobRec_Hide" id="jobToJobRec_c9c6ded3f16248bb_sj"></div>
<div class="jobsearch-SerpJobCard unifiedRow row result" data-jk="dd0d5c9b7aef7be0" data-tn-component="organicJob" id="p_dd0d5c9b7aef7be0">
<h2 class="title">
<a class="jobtitle turnstileLink" data-tn-element="jobTitle" href="/rc/clk?jk=dd0d5c9b7aef7be0&fccid=6b1185eac0308107&vjs=3" id="jl_dd0d5c9b7aef7be0" onclick="setRefineByCookie([]); return rclk(this,jobmap[5],true,1);" onmousedown="return rclk(this,jobmap[5],1);" rel="noopener nofollow" target="_blank" title="Criminal Intelligence Analyst">
Criminal <b>Intelligence</b> <b>Analyst</b></a>
</h2>
<div class="sjcl">
<div>
<span class="company">
<a class="turnstileLink" data-tn-element="companyName" href="/cmp/Hillsborough-County-Sheriffs-Office" onmousedown="this.href = appendParamsOnce(this.href, 'from=SERP&campaignid=serp-linkcompanyname&fromjk=dd0d5c9b7aef7be0&jcid=b83c82431ba24b7b')" rel="noopener" target="_blank">
Hillsborough County Sheriff's Office</a></span>
<span class="ratingsDisplay">
<a class="ratingNumber" data-tn-variant="cmplinktst2" href="/cmp/Hillsborough-County-Sheriffs-Office/reviews" onmousedown="this.href = appendParamsOnce(this.href, '?campaignid=cmplinktst2&from=SERP&jt=Criminal+Intelligence+Analyst&fromjk=dd0d5c9b7aef7be0&jcid=b83c82431ba24b7b');" rel="noopener" target="_blank" title="Hillsborough County Sheriff's Office reviews">
<span class="ratingsContent">
4.0<svg class="starIcon" height="12px" role="img" width="12px">
<g>
<path d="M 12.00,4.34 C 12.00,4.34 7.69,3.97 7.69,3.97 7.69,3.97 6.00,0.00 6.00,0.00 6.00,0.00 4.31,3.98 4.31,3.98 4.31,3.98 0.00,4.34 0.00,4.34 0.00,4.34 3.28,7.18 3.28,7.18 3.28,7.18 2.29,11.40 2.29,11.40 2.29,11.40 6.00,9.16 6.00,9.16 6.00,9.16 9.71,11.40 9.71,11.40 9.71,11.40 8.73,7.18 8.73,7.18 8.73,7.18 12.00,4.34 12.00,4.34 Z" style="fill: #FFB103"></path>
</g>
</svg>
</span>
</a>
</span>
</div>
<div class="recJobLoc" data-rc-loc="Tampa, FL" id="recJobLoc_dd0d5c9b7aef7be0" style="display: none"></div>
<span class="location accessible-contrast-color-location">Tampa, FL</span>
</div>
<div class="salarySnippet holisticSalary">
<span class="salary no-wrap">
<span class="salaryText">
Up to $77,542 a year</span>
</span>
</div>
<div class="summary">
<ul style="list-style-type:circle;margin-top: 0px;margin-bottom: 0px;padding-left:20px;">
<li>Experience with a law enforcement or military agency performing technical tasks, such as: identifying, extracting, collating, and analyzing data from databases…</li>
</ul></div>
<div class="jobsearch-SerpJobCard-footer">
<div class="jobsearch-SerpJobCard-footerActions">
<div class="result-link-bar-container">
<div class="result-link-bar"><span class="date">30+ days ago</span><span class="tt_set" id="tt_set_5"><div class="job-reaction"><button aria-expanded="false" aria-haspopup="true" aria-label="save or dislike" class="job-reaction-kebab" data-ol-has-click-handler="" onclick="toggleKebabMenu('dd0d5c9b7aef7be0', false, event); return false;" tabindex="0"></button><span class="job-reaction-kebab-menu"><button class="job-reaction-kebab-item job-reaction-save" data-ol-has-click-handler="" onclick="changeJobState('dd0d5c9b7aef7be0', 'save', 'linkbar', false, '');return false;"><svg focusable="false" height="16" viewbox="0 0 24 24" width="16"><g><path d="M16.5,3A6,6,0,0,0,12,5.09,6,6,0,0,0,7.5,3,5.45,5.45,0,0,0,2,8.5C2,12.28,5.4,15.36,10.55,20L12,21.35,13.45,20C18.6,15.36,22,12.28,22,8.5A5.45,5.45,0,0,0,16.5,3ZM12.1,18.55l-0.1.1-0.1-.1C7.14,14.24,4,11.39,4,8.5A3.42,3.42,0,0,1,7.5,5a3.91,3.91,0,0,1,3.57,2.36h1.87A3.88,3.88,0,0,1,16.5,5,3.42,3.42,0,0,1,20,8.5C20,11.39,16.86,14.24,12.1,18.55Z" fill="#2d2d2d"></path></g></svg><span class="job-reaction-kebab-item-text">Save job</span></button><button class="job-reaction-kebab-item job-reaction-dislike" data-ol-has-click-handler="" onclick="dislikeJob(false, false, 'dd0d5c9b7aef7be0', 'unsave', 'linkbar', false, '');"><span class="job-reaction-dislike-icon"></span><span class="job-reaction-kebab-item-text">Not interested</span></button><button class="job-reaction-kebab-item job-reaction-report" onclick="reportJob('dd0d5c9b7aef7be0');"><span class="job-reaction-report-icon"></span><span class="job-reaction-kebab-item-text">Report Job</span></button></span></div><span class="result-link-bar-separator">·</span><a class="sl resultLink save-job-link" href="#" id="sj_dd0d5c9b7aef7be0" onclick="changeJobState('dd0d5c9b7aef7be0', 'save', 'linkbar', false, ''); return false;" title="Save this job to my.indeed">Save job</a><span class="result-link-bar-separator">·</span><button aria-expanded="false" class="sl resultLink more-link" id="tog_5" onclick="toggleMoreLinks('dd0d5c9b7aef7be0', '5'); return false;">More...</button></span><div class="edit_note_content" id="editsaved2_dd0d5c9b7aef7be0" style="display:none;"></div><script>if (!window['result_dd0d5c9b7aef7be0']) {window['result_dd0d5c9b7aef7be0'] = {};}window['result_dd0d5c9b7aef7be0']['showSource'] = false; window['result_dd0d5c9b7aef7be0']['source'] = "Hillsborough County Sheriff\x27s Office"; window['result_dd0d5c9b7aef7be0']['loggedIn'] = false; window['result_dd0d5c9b7aef7be0']['showMyJobsLinks'] = false;window['result_dd0d5c9b7aef7be0']['undoAction'] = "unsave";window['result_dd0d5c9b7aef7be0']['relativeJobAge'] = "30+ days ago";window['result_dd0d5c9b7aef7be0']['jobKey'] = "dd0d5c9b7aef7be0"; window['result_dd0d5c9b7aef7be0']['myIndeedAvailable'] = true; window['result_dd0d5c9b7aef7be0']['showMoreActionsLink'] = window['result_dd0d5c9b7aef7be0']['showMoreActionsLink'] || true; window['result_dd0d5c9b7aef7be0']['resultNumber'] = 5; window['result_dd0d5c9b7aef7be0']['jobStateChangedToSaved'] = false; window['result_dd0d5c9b7aef7be0']['searchState'] = "q=intelligence analyst&start=2"; window['result_dd0d5c9b7aef7be0']['basicPermaLink'] = "https://www.indeed.com"; window['result_dd0d5c9b7aef7be0']['saveJobFailed'] = false; window['result_dd0d5c9b7aef7be0']['removeJobFailed'] = false; window['result_dd0d5c9b7aef7be0']['requestPending'] = false; window['result_dd0d5c9b7aef7be0']['notesEnabled'] = true; window['result_dd0d5c9b7aef7be0']['currentPage'] = "serp"; window['result_dd0d5c9b7aef7be0']['sponsored'] = 
false;window['result_dd0d5c9b7aef7be0']['reportJobButtonEnabled'] = false; window['result_dd0d5c9b7aef7be0']['showMyJobsHired'] = false; window['result_dd0d5c9b7aef7be0']['showSaveForSponsored'] = false; window['result_dd0d5c9b7aef7be0']['showJobAge'] = true; window['result_dd0d5c9b7aef7be0']['showHolisticCard'] = true; window['result_dd0d5c9b7aef7be0']['showDislike'] = true; window['result_dd0d5c9b7aef7be0']['showKebab'] = true; window['result_dd0d5c9b7aef7be0']['showReport'] = true;</script></div></div>
</div>
</div>
<div class="tab-container">
<div class="more-links-container result-tab" id="tt_display_5" style="display:none;"><div class="more_actions" id="more_5"><ul><li><span class="mat">View all <a href="/jobs?q=Hillsborough+County+Sheriff%27s+Office&l=Tampa,+FL">Hillsborough County Sheriff's Office jobs in Tampa, FL</a> - <a href="/l-Tampa,-FL-jobs.html">Tampa jobs</a></span></li><li><span class="mat">Salary Search: <a href="/salaries/intelligence-analyst-Salaries,-Tampa-FL" onmousedown="this.href = appendParamsOnce(this.href, '?campaignid=serp-more&fromjk=dd0d5c9b7aef7be0&from=serp-more');">Intelligence Analyst salaries in Tampa, FL</a></span></li><li><span class="mat">Learn more about working at <a href="/cmp/Hillsborough-County-Sheriffs-Office/about" onmousedown="this.href = appendParamsOnce(this.href, '?fromjk=dd0d5c9b7aef7be0&from=serp-more&campaignid=serp-more&jcid=b83c82431ba24b7b');">Hillsborough County Sheriff's Office</a></span></li><li><span class="mat">See popular <a href="/cmp/Hillsborough-County-Sheriffs-Office/faq" onmousedown="this.href = appendParamsOnce(this.href, '?from=serp-more&campaignid=serp-more&fromjk=dd0d5c9b7aef7be0&jcid=b83c82431ba24b7b');">questions & answers about Hillsborough County Sheriff's Office</a></span></li></ul></div><a class="close-link closeLink" href="#" onclick="toggleMoreLinks('dd0d5c9b7aef7be0'); return false;" title="Close"></a></div><div class="dya-container result-tab"></div>
<div class="tellafriend-container result-tab email_job_content"></div>
<div class="sign-in-container result-tab"></div>
<div class="notes-container result-tab"></div>
</div>
</div>
<div class="jobToJobRec_Hide" id="jobToJobRec_dd0d5c9b7aef7be0_sj"></div>
<div class="jobsearch-SerpJobCard unifiedRow row result" data-jk="8486c2ddb33ff713" data-tn-component="organicJob" id="p_8486c2ddb33ff713">
<h2 class="title">
<a class="jobtitle turnstileLink" data-tn-element="jobTitle" href="/rc/clk?jk=8486c2ddb33ff713&fccid=e8f18ca6180ec8da&vjs=3" id="jl_8486c2ddb33ff713" onclick="setRefineByCookie([]); return rclk(this,jobmap[6],true,1);" onmousedown="return rclk(this,jobmap[6],1);" rel="noopener nofollow" target="_blank" title="Linguist/Language Analyst (Russian/Chinese)">
Linguist/Language <b>Analyst</b> (Russian/Chinese)</a>
</h2>
<div class="sjcl">
<div>
<span class="company">
<a class="turnstileLink" data-tn-element="companyName" href="/cmp/National-Security-Agency" onmousedown="this.href = appendParamsOnce(this.href, 'from=SERP&campaignid=serp-linkcompanyname&fromjk=8486c2ddb33ff713&jcid=e8f18ca6180ec8da')" rel="noopener" target="_blank">
National Security Agency</a></span>
<span class="ratingsDisplay">
<a class="ratingNumber" data-tn-variant="cmplinktst2" href="/cmp/National-Security-Agency/reviews" onmousedown="this.href = appendParamsOnce(this.href, '?campaignid=cmplinktst2&from=SERP&jt=Linguist%5C%2FLanguage+Analyst+%28Russian%5C%2FChinese%29&fromjk=8486c2ddb33ff713&jcid=e8f18ca6180ec8da');" rel="noopener" target="_blank" title="National Security Agency reviews">
<span class="ratingsContent">
4.2<svg class="starIcon" height="12px" role="img" width="12px">
<g>
<path d="M 12.00,4.34 C 12.00,4.34 7.69,3.97 7.69,3.97 7.69,3.97 6.00,0.00 6.00,0.00 6.00,0.00 4.31,3.98 4.31,3.98 4.31,3.98 0.00,4.34 0.00,4.34 0.00,4.34 3.28,7.18 3.28,7.18 3.28,7.18 2.29,11.40 2.29,11.40 2.29,11.40 6.00,9.16 6.00,9.16 6.00,9.16 9.71,11.40 9.71,11.40 9.71,11.40 8.73,7.18 8.73,7.18 8.73,7.18 12.00,4.34 12.00,4.34 Z" style="fill: #FFB103"></path>
</g>
</svg>
</span>
</a>
</span>
</div>
<div class="recJobLoc" data-rc-loc="United States" id="recJobLoc_8486c2ddb33ff713" style="display: none"></div>
<span class="location accessible-contrast-color-location">United States</span>
</div>
<div class="salarySnippet holisticSalary">
<span class="salary no-wrap">
<span class="salaryText">
$64,009 - $99,741 a year</span>
</span>
</div>
<div class="summary">
<ul style="list-style-type:circle;margin-top: 0px;margin-bottom: 0px;padding-left:20px;">
<li style="margin-bottom:0px;">Prior military without a degree but with language analysis cryptologic experience will be considered.</li>
<li>Degree in Language, Regional/Area Studies, International…</li>
</ul></div>
<div class="jobsearch-SerpJobCard-footer">
<div class="jobsearch-SerpJobCard-footerActions">
<div class="result-link-bar-container">
<div class="result-link-bar"><span class="date">15 days ago</span><span class="tt_set" id="tt_set_6"><div class="job-reaction"><button aria-expanded="false" aria-haspopup="true" aria-label="save or dislike" class="job-reaction-kebab" data-ol-has-click-handler="" onclick="toggleKebabMenu('8486c2ddb33ff713', false, event); return false;" tabindex="0"></button><span class="job-reaction-kebab-menu"><button class="job-reaction-kebab-item job-reaction-save" data-ol-has-click-handler="" onclick="changeJobState('8486c2ddb33ff713', 'save', 'linkbar', false, '');return false;"><svg focusable="false" height="16" viewbox="0 0 24 24" width="16"><g><path d="M16.5,3A6,6,0,0,0,12,5.09,6,6,0,0,0,7.5,3,5.45,5.45,0,0,0,2,8.5C2,12.28,5.4,15.36,10.55,20L12,21.35,13.45,20C18.6,15.36,22,12.28,22,8.5A5.45,5.45,0,0,0,16.5,3ZM12.1,18.55l-0.1.1-0.1-.1C7.14,14.24,4,11.39,4,8.5A3.42,3.42,0,0,1,7.5,5a3.91,3.91,0,0,1,3.57,2.36h1.87A3.88,3.88,0,0,1,16.5,5,3.42,3.42,0,0,1,20,8.5C20,11.39,16.86,14.24,12.1,18.55Z" fill="#2d2d2d"></path></g></svg><span class="job-reaction-kebab-item-text">Save job</span></button><button class="job-reaction-kebab-item job-reaction-dislike" data-ol-has-click-handler="" onclick="dislikeJob(false, false, '8486c2ddb33ff713', 'unsave', 'linkbar', false, '');"><span class="job-reaction-dislike-icon"></span><span class="job-reaction-kebab-item-text">Not interested</span></button><button class="job-reaction-kebab-item job-reaction-report" onclick="reportJob('8486c2ddb33ff713');"><span class="job-reaction-report-icon"></span><span class="job-reaction-kebab-item-text">Report Job</span></button></span></div><span class="result-link-bar-separator">·</span><a class="sl resultLink save-job-link" href="#" id="sj_8486c2ddb33ff713" onclick="changeJobState('8486c2ddb33ff713', 'save', 'linkbar', false, ''); return false;" title="Save this job to my.indeed">Save job</a><span class="result-link-bar-separator">·</span><button aria-expanded="false" class="sl resultLink more-link" id="tog_6" onclick="toggleMoreLinks('8486c2ddb33ff713', '6'); return false;">More...</button></span><div class="edit_note_content" id="editsaved2_8486c2ddb33ff713" style="display:none;"></div><script>if (!window['result_8486c2ddb33ff713']) {window['result_8486c2ddb33ff713'] = {};}window['result_8486c2ddb33ff713']['showSource'] = false; window['result_8486c2ddb33ff713']['source'] = "National Security Agency"; window['result_8486c2ddb33ff713']['loggedIn'] = false; window['result_8486c2ddb33ff713']['showMyJobsLinks'] = false;window['result_8486c2ddb33ff713']['undoAction'] = "unsave";window['result_8486c2ddb33ff713']['relativeJobAge'] = "15 days ago";window['result_8486c2ddb33ff713']['jobKey'] = "8486c2ddb33ff713"; window['result_8486c2ddb33ff713']['myIndeedAvailable'] = true; window['result_8486c2ddb33ff713']['showMoreActionsLink'] = window['result_8486c2ddb33ff713']['showMoreActionsLink'] || true; window['result_8486c2ddb33ff713']['resultNumber'] = 6; window['result_8486c2ddb33ff713']['jobStateChangedToSaved'] = false; window['result_8486c2ddb33ff713']['searchState'] = "q=intelligence analyst&start=2"; window['result_8486c2ddb33ff713']['basicPermaLink'] = "https://www.indeed.com"; window['result_8486c2ddb33ff713']['saveJobFailed'] = false; window['result_8486c2ddb33ff713']['removeJobFailed'] = false; window['result_8486c2ddb33ff713']['requestPending'] = false; window['result_8486c2ddb33ff713']['notesEnabled'] = true; window['result_8486c2ddb33ff713']['currentPage'] = "serp"; window['result_8486c2ddb33ff713']['sponsored'] = 
false;window['result_8486c2ddb33ff713']['reportJobButtonEnabled'] = false; window['result_8486c2ddb33ff713']['showMyJobsHired'] = false; window['result_8486c2ddb33ff713']['showSaveForSponsored'] = false; window['result_8486c2ddb33ff713']['showJobAge'] = true; window['result_8486c2ddb33ff713']['showHolisticCard'] = true; window['result_8486c2ddb33ff713']['showDislike'] = true; window['result_8486c2ddb33ff713']['showKebab'] = true; window['result_8486c2ddb33ff713']['showReport'] = true;</script></div></div>
</div>
</div>
<div class="tab-container">
<div class="more-links-container result-tab" id="tt_display_6" style="display:none;"><div class="more_actions" id="more_6"><ul><li><span class="mat">View all <a href="/q-National-Security-Agency-l-United-States-jobs.html">National Security Agency jobs in United States</a> - <a href="/l-United-States-jobs.html">United States jobs</a></span></li><li><span class="mat">Salary Search: <a href="/salaries/linguist-Salaries,-US" onmousedown="this.href = appendParamsOnce(this.href, '?campaignid=serp-more&fromjk=8486c2ddb33ff713&from=serp-more');">Linguist salaries in United States</a></span></li><li><span class="mat">Learn more about working at <a href="/cmp/National-Security-Agency/about" onmousedown="this.href = appendParamsOnce(this.href, '?fromjk=8486c2ddb33ff713&from=serp-more&campaignid=serp-more&jcid=e8f18ca6180ec8da');">National Security Agency</a></span></li><li><span class="mat">See popular <a href="/cmp/National-Security-Agency/faq" onmousedown="this.href = appendParamsOnce(this.href, '?from=serp-more&campaignid=serp-more&fromjk=8486c2ddb33ff713&jcid=e8f18ca6180ec8da');">questions & answers about National Security Agency</a></span></li></ul></div><a class="close-link closeLink" href="#" onclick="toggleMoreLinks('8486c2ddb33ff713'); return false;" title="Close"></a></div><div class="dya-container result-tab"></div>
<div class="tellafriend-container result-tab email_job_content"></div>
<div class="sign-in-container result-tab"></div>
<div class="notes-container result-tab"></div>
</div>
</div>
<div class="jobToJobRec_Hide" id="jobToJobRec_8486c2ddb33ff713_sj"></div>
<div class="jobsearch-SerpJobCard unifiedRow row result" data-jk="7027ddae82aea145" data-tn-component="organicJob" id="p_7027ddae82aea145">
<h2 class="title">
<a class="jobtitle turnstileLink" data-tn-element="jobTitle" href="/rc/clk?jk=7027ddae82aea145&fccid=f1b5b95bc792ac3a&vjs=3" id="jl_7027ddae82aea145" onclick="setRefineByCookie([]); return rclk(this,jobmap[7],true,0);" onmousedown="return rclk(this,jobmap[7],0);" rel="noopener nofollow" target="_blank" title="Intelligence Analyst">
<b>Intelligence</b> <b>Analyst</b></a>
<span class="new">new</span></h2>
<div class="sjcl">
<div>
<span class="company">
<a class="turnstileLink" data-tn-element="companyName" href="/cmp/Halfaker-and-Associates" onmousedown="this.href = appendParamsOnce(this.href, 'from=SERP&campaignid=serp-linkcompanyname&fromjk=7027ddae82aea145&jcid=f1b5b95bc792ac3a')" rel="noopener" target="_blank">
Halfaker and Associates</a></span>
<span class="ratingsDisplay">
<a class="ratingNumber" data-tn-variant="cmplinktst2" href="/cmp/Halfaker-and-Associates/reviews" onmousedown="this.href = appendParamsOnce(this.href, '?campaignid=cmplinktst2&from=SERP&jt=Intelligence+Analyst&fromjk=7027ddae82aea145&jcid=f1b5b95bc792ac3a');" rel="noopener" target="_blank" title="Halfaker and Associates reviews">
<span class="ratingsContent">
3.8<svg class="starIcon" height="12px" role="img" width="12px">
<g>
<path d="M 12.00,4.34 C 12.00,4.34 7.69,3.97 7.69,3.97 7.69,3.97 6.00,0.00 6.00,0.00 6.00,0.00 4.31,3.98 4.31,3.98 4.31,3.98 0.00,4.34 0.00,4.34 0.00,4.34 3.28,7.18 3.28,7.18 3.28,7.18 2.29,11.40 2.29,11.40 2.29,11.40 6.00,9.16 6.00,9.16 6.00,9.16 9.71,11.40 9.71,11.40 9.71,11.40 8.73,7.18 8.73,7.18 8.73,7.18 12.00,4.34 12.00,4.34 Z" style="fill: #FFB103"></path>
</g>
</svg>
</span>
</a>
</span>
</div>
<div class="recJobLoc" data-rc-loc="Washington, DC" id="recJobLoc_7027ddae82aea145" style="display: none"></div>
<span class="location accessible-contrast-color-location">Washington, DC</span>
</div>
<div class="summary">
<ul style="list-style-type:circle;margin-top: 0px;margin-bottom: 0px;padding-left:20px;">
<li>Must also demonstrate the ability to conduct research, analysis, and technical writing skills and be able to perform triage on questions, issues, or events…</li>
</ul></div>
<div class="jobsearch-SerpJobCard-footer">
<div class="jobsearch-SerpJobCard-footerActions">
<div class="result-link-bar-container">
<div class="result-link-bar"><span class="date">3 days ago</span><span class="tt_set" id="tt_set_7"><div class="job-reaction"><button aria-expanded="false" aria-haspopup="true" aria-label="save or dislike" class="job-reaction-kebab" data-ol-has-click-handler="" onclick="toggleKebabMenu('7027ddae82aea145', false, event); return false;" tabindex="0"></button><span class="job-reaction-kebab-menu"><button class="job-reaction-kebab-item job-reaction-save" data-ol-has-click-handler="" onclick="changeJobState('7027ddae82aea145', 'save', 'linkbar', false, '');return false;"><svg focusable="false" height="16" viewbox="0 0 24 24" width="16"><g><path d="M16.5,3A6,6,0,0,0,12,5.09,6,6,0,0,0,7.5,3,5.45,5.45,0,0,0,2,8.5C2,12.28,5.4,15.36,10.55,20L12,21.35,13.45,20C18.6,15.36,22,12.28,22,8.5A5.45,5.45,0,0,0,16.5,3ZM12.1,18.55l-0.1.1-0.1-.1C7.14,14.24,4,11.39,4,8.5A3.42,3.42,0,0,1,7.5,5a3.91,3.91,0,0,1,3.57,2.36h1.87A3.88,3.88,0,0,1,16.5,5,3.42,3.42,0,0,1,20,8.5C20,11.39,16.86,14.24,12.1,18.55Z" fill="#2d2d2d"></path></g></svg><span class="job-reaction-kebab-item-text">Save job</span></button><button class="job-reaction-kebab-item job-reaction-dislike" data-ol-has-click-handler="" onclick="dislikeJob(false, false, '7027ddae82aea145', 'unsave', 'linkbar', false, '');"><span class="job-reaction-dislike-icon"></span><span class="job-reaction-kebab-item-text">Not interested</span></button><button class="job-reaction-kebab-item job-reaction-report" onclick="reportJob('7027ddae82aea145');"><span class="job-reaction-report-icon"></span><span class="job-reaction-kebab-item-text">Report Job</span></button></span></div><span class="result-link-bar-separator">·</span><a class="sl resultLink save-job-link" href="#" id="sj_7027ddae82aea145" onclick="changeJobState('7027ddae82aea145', 'save', 'linkbar', false, ''); return false;" title="Save this job to my.indeed">Save job</a><span class="result-link-bar-separator">·</span><button aria-expanded="false" class="sl resultLink more-link" id="tog_7" onclick="toggleMoreLinks('7027ddae82aea145', '7'); return false;">More...</button></span><div class="edit_note_content" id="editsaved2_7027ddae82aea145" style="display:none;"></div><script>if (!window['result_7027ddae82aea145']) {window['result_7027ddae82aea145'] = {};}window['result_7027ddae82aea145']['showSource'] = false; window['result_7027ddae82aea145']['source'] = "Halfaker and Associates"; window['result_7027ddae82aea145']['loggedIn'] = false; window['result_7027ddae82aea145']['showMyJobsLinks'] = false;window['result_7027ddae82aea145']['undoAction'] = "unsave";window['result_7027ddae82aea145']['relativeJobAge'] = "3 days ago";window['result_7027ddae82aea145']['jobKey'] = "7027ddae82aea145"; window['result_7027ddae82aea145']['myIndeedAvailable'] = true; window['result_7027ddae82aea145']['showMoreActionsLink'] = window['result_7027ddae82aea145']['showMoreActionsLink'] || true; window['result_7027ddae82aea145']['resultNumber'] = 7; window['result_7027ddae82aea145']['jobStateChangedToSaved'] = false; window['result_7027ddae82aea145']['searchState'] = "q=intelligence analyst&start=2"; window['result_7027ddae82aea145']['basicPermaLink'] = "https://www.indeed.com"; window['result_7027ddae82aea145']['saveJobFailed'] = false; window['result_7027ddae82aea145']['removeJobFailed'] = false; window['result_7027ddae82aea145']['requestPending'] = false; window['result_7027ddae82aea145']['notesEnabled'] = true; window['result_7027ddae82aea145']['currentPage'] = "serp"; window['result_7027ddae82aea145']['sponsored'] = 
false;window['result_7027ddae82aea145']['reportJobButtonEnabled'] = false; window['result_7027ddae82aea145']['showMyJobsHired'] = false; window['result_7027ddae82aea145']['showSaveForSponsored'] = false; window['result_7027ddae82aea145']['showJobAge'] = true; window['result_7027ddae82aea145']['showHolisticCard'] = true; window['result_7027ddae82aea145']['showDislike'] = true; window['result_7027ddae82aea145']['showKebab'] = true; window['result_7027ddae82aea145']['showReport'] = true;</script></div></div>
</div>
</div>
<div class="tab-container">
<div class="more-links-container result-tab" id="tt_display_7" style="display:none;"><div class="more_actions" id="more_7"><ul><li><span class="mat">View all <a href="/q-Halfaker-Associates-l-Washington,-DC-jobs.html">Halfaker and Associates jobs in Washington, DC</a> - <a href="/l-Washington,-DC-jobs.html">Washington jobs</a></span></li><li><span class="mat">Salary Search: <a href="/salaries/intelligence-analyst-Salaries,-Washington-DC" onmousedown="this.href = appendParamsOnce(this.href, '?campaignid=serp-more&fromjk=7027ddae82aea145&from=serp-more');">Intelligence Analyst salaries in Washington, DC</a></span></li><li><span class="mat">Learn more about working at <a href="/cmp/Halfaker-and-Associates" onmousedown="this.href = appendParamsOnce(this.href, '?fromjk=7027ddae82aea145&from=serp-more&campaignid=serp-more&jcid=f1b5b95bc792ac3a');">Halfaker and Associates</a></span></li><li><span class="mat">See popular <a href="/cmp/Halfaker-and-Associates/faq" onmousedown="this.href = appendParamsOnce(this.href, '?from=serp-more&campaignid=serp-more&fromjk=7027ddae82aea145&jcid=f1b5b95bc792ac3a');">questions & answers about Halfaker and Associates</a></span></li></ul></div><a class="close-link closeLink" href="#" onclick="toggleMoreLinks('7027ddae82aea145'); return false;" title="Close"></a></div><div class="dya-container result-tab"></div>
<div class="tellafriend-container result-tab email_job_content"></div>
<div class="sign-in-container result-tab"></div>
<div class="notes-container result-tab"></div>
</div>
</div>
<div class="jobToJobRec_Hide" id="jobToJobRec_7027ddae82aea145_sj"></div>
<div class="jobsearch-SerpJobCard unifiedRow row result" data-jk="15c98f42a69b128f" data-tn-component="organicJob" id="p_15c98f42a69b128f">
<h2 class="title">
<a class="jobtitle turnstileLink" data-tn-element="jobTitle" href="/rc/clk?jk=15c98f42a69b128f&fccid=95fcff06e59f4033&vjs=3" id="jl_15c98f42a69b128f" onclick="setRefineByCookie([]); return rclk(this,jobmap[8],true,0);" onmousedown="return rclk(this,jobmap[8],0);" rel="noopener nofollow" target="_blank" title="Virtual Intelligence Analyst">
Virtual <b>Intelligence</b> <b>Analyst</b></a>
</h2>
<div class="sjcl">
<div>
<span class="company">
<a class="turnstileLink" data-tn-element="companyName" href="/cmp/G4s" onmousedown="this.href = appendParamsOnce(this.href, 'from=SERP&campaignid=serp-linkcompanyname&fromjk=15c98f42a69b128f&jcid=95fcff06e59f4033')" rel="noopener" target="_blank">
G4S</a></span>
<span class="ratingsDisplay">
<a class="ratingNumber" data-tn-variant="cmplinktst2" href="/cmp/G4s/reviews" onmousedown="this.href = appendParamsOnce(this.href, '?campaignid=cmplinktst2&from=SERP&jt=Virtual+Intelligence+Analyst&fromjk=15c98f42a69b128f&jcid=95fcff06e59f4033');" rel="noopener" target="_blank">
<span class="ratingsContent">
3.4<svg class="starIcon" height="12px" role="img" width="12px">
<g>
<path d="M 12.00,4.34 C 12.00,4.34 7.69,3.97 7.69,3.97 7.69,3.97 6.00,0.00 6.00,0.00 6.00,0.00 4.31,3.98 4.31,3.98 4.31,3.98 0.00,4.34 0.00,4.34 0.00,4.34 3.28,7.18 3.28,7.18 3.28,7.18 2.29,11.40 2.29,11.40 2.29,11.40 6.00,9.16 6.00,9.16 6.00,9.16 9.71,11.40 9.71,11.40 9.71,11.40 8.73,7.18 8.73,7.18 8.73,7.18 12.00,4.34 12.00,4.34 Z" style="fill: #FFB103"></path>
</g>
</svg>
</span>
</a>
</span>
</div>
<div class="recJobLoc" data-rc-loc="Jupiter, FL" id="recJobLoc_15c98f42a69b128f" style="display: none"></div>
<span class="location accessible-contrast-color-location">Jupiter, FL 33458</span>
<span class="remote-bullet">•</span>
<span class="remote">Remote</span>
</div>
<table class="jobCardShelfContainer" role="presentation"><tr class="jobCardShelf"><td class="jobCardShelfItem indeedApply"><span class="jobCardShelfIcon"><svg fill="none" height="16" viewbox="0 0 20 20" width="16"><rect fill="#FF5A1F" height="20" rx="10" width="20"></rect><path clip-rule="evenodd" d="M15.3125 4.0625L10.8125 15.3125L7.99999 11.375L15.3125 4.0625ZM7.604 12.7576L6.875 15.3125L8.567 14.1054L7.604 12.7576ZM7.20463 10.5796L12.419 5.36525L4.0625 9.125L6.9875 10.7968L7.20463 10.5796Z" fill="white" fill-rule="evenodd"></path></svg></span><span class="iaLabel iaIconActive">Easily apply</span></td></tr></table><div class="summary">
<ul style="list-style-type:circle;margin-top: 0px;margin-bottom: 0px;padding-left:20px;">
<li>Experience learning and quickly becoming proficient maintaining databases and utilizing software applications, such as <b>intelligence</b> analysis and data collection…</li>
</ul></div>
<div class="jobsearch-SerpJobCard-footer">
<div class="jobsearch-SerpJobCard-footerActions">
<div class="result-link-bar-container">
<div class="result-link-bar"><span class="date">15 days ago</span><span class="tt_set" id="tt_set_8"><div class="job-reaction"><button aria-expanded="false" aria-haspopup="true" aria-label="save or dislike" class="job-reaction-kebab" data-ol-has-click-handler="" onclick="toggleKebabMenu('15c98f42a69b128f', false, event); return false;" tabindex="0"></button><span class="job-reaction-kebab-menu"><button class="job-reaction-kebab-item job-reaction-save" data-ol-has-click-handler="" onclick="changeJobState('15c98f42a69b128f', 'save', 'linkbar', false, '');return false;"><svg focusable="false" height="16" viewbox="0 0 24 24" width="16"><g><path d="M16.5,3A6,6,0,0,0,12,5.09,6,6,0,0,0,7.5,3,5.45,5.45,0,0,0,2,8.5C2,12.28,5.4,15.36,10.55,20L12,21.35,13.45,20C18.6,15.36,22,12.28,22,8.5A5.45,5.45,0,0,0,16.5,3ZM12.1,18.55l-0.1.1-0.1-.1C7.14,14.24,4,11.39,4,8.5A3.42,3.42,0,0,1,7.5,5a3.91,3.91,0,0,1,3.57,2.36h1.87A3.88,3.88,0,0,1,16.5,5,3.42,3.42,0,0,1,20,8.5C20,11.39,16.86,14.24,12.1,18.55Z" fill="#2d2d2d"></path></g></svg><span class="job-reaction-kebab-item-text">Save job</span></button><button class="job-reaction-kebab-item job-reaction-dislike" data-ol-has-click-handler="" onclick="dislikeJob(false, false, '15c98f42a69b128f', 'unsave', 'linkbar', false, '');"><span class="job-reaction-dislike-icon"></span><span class="job-reaction-kebab-item-text">Not interested</span></button><button class="job-reaction-kebab-item job-reaction-report" onclick="reportJob('15c98f42a69b128f');"><span class="job-reaction-report-icon"></span><span class="job-reaction-kebab-item-text">Report Job</span></button></span></div><span class="result-link-bar-separator">·</span><a class="sl resultLink save-job-link" href="#" id="sj_15c98f42a69b128f" onclick="changeJobState('15c98f42a69b128f', 'save', 'linkbar', false, ''); return false;" title="Save this job to my.indeed">Save job</a><span class="result-link-bar-separator">·</span><button aria-expanded="false" class="sl resultLink more-link" id="tog_8" onclick="toggleMoreLinks('15c98f42a69b128f', '8'); return false;">More...</button></span><div class="edit_note_content" id="editsaved2_15c98f42a69b128f" style="display:none;"></div><script>if (!window['result_15c98f42a69b128f']) {window['result_15c98f42a69b128f'] = {};}window['result_15c98f42a69b128f']['showSource'] = false; window['result_15c98f42a69b128f']['source'] = "G4S"; window['result_15c98f42a69b128f']['loggedIn'] = false; window['result_15c98f42a69b128f']['showMyJobsLinks'] = false;window['result_15c98f42a69b128f']['undoAction'] = "unsave";window['result_15c98f42a69b128f']['relativeJobAge'] = "15 days ago";window['result_15c98f42a69b128f']['jobKey'] = "15c98f42a69b128f"; window['result_15c98f42a69b128f']['myIndeedAvailable'] = true; window['result_15c98f42a69b128f']['showMoreActionsLink'] = window['result_15c98f42a69b128f']['showMoreActionsLink'] || true; window['result_15c98f42a69b128f']['resultNumber'] = 8; window['result_15c98f42a69b128f']['jobStateChangedToSaved'] = false; window['result_15c98f42a69b128f']['searchState'] = "q=intelligence analyst&start=2"; window['result_15c98f42a69b128f']['basicPermaLink'] = "https://www.indeed.com"; window['result_15c98f42a69b128f']['saveJobFailed'] = false; window['result_15c98f42a69b128f']['removeJobFailed'] = false; window['result_15c98f42a69b128f']['requestPending'] = false; window['result_15c98f42a69b128f']['notesEnabled'] = true; window['result_15c98f42a69b128f']['currentPage'] = "serp"; window['result_15c98f42a69b128f']['sponsored'] = 
false;window['result_15c98f42a69b128f']['reportJobButtonEnabled'] = false; window['result_15c98f42a69b128f']['showMyJobsHired'] = false; window['result_15c98f42a69b128f']['showSaveForSponsored'] = false; window['result_15c98f42a69b128f']['showJobAge'] = true; window['result_15c98f42a69b128f']['showHolisticCard'] = true; window['result_15c98f42a69b128f']['showDislike'] = true; window['result_15c98f42a69b128f']['showKebab'] = true; window['result_15c98f42a69b128f']['showReport'] = true;</script></div></div>
</div>
</div>
<div class="tab-container">
<div class="more-links-container result-tab" id="tt_display_8" style="display:none;"><div class="more_actions" id="more_8"><ul><li><span class="mat">View all <a href="/q-G4s-l-Jupiter,-FL-jobs.html">G4S jobs in Jupiter, FL</a> - <a href="/l-Jupiter,-FL-jobs.html">Jupiter jobs</a></span></li><li><span class="mat">Learn more about working at <a href="/cmp/G4s/about" onmousedown="this.href = appendParamsOnce(this.href, '?fromjk=15c98f42a69b128f&from=serp-more&campaignid=serp-more&jcid=95fcff06e59f4033');">G4S</a></span></li><li><span class="mat">See popular <a href="/cmp/G4s/faq" onmousedown="this.href = appendParamsOnce(this.href, '?from=serp-more&campaignid=serp-more&fromjk=15c98f42a69b128f&jcid=95fcff06e59f4033');">questions & answers about G4S</a></span></li></ul></div><a class="close-link closeLink" href="#" onclick="toggleMoreLinks('15c98f42a69b128f'); return false;" title="Close"></a></div><div class="dya-container result-tab"></div>
<div class="tellafriend-container result-tab email_job_content"></div>
<div class="sign-in-container result-tab"></div>
<div class="notes-container result-tab"></div>
</div>
</div>
<div class="jobToJobRec_Hide" id="jobToJobRec_15c98f42a69b128f_sj"></div>
<div class="jobsearch-SerpJobCard unifiedRow row result" data-jk="237140e37987cb7c" data-tn-component="organicJob" id="p_237140e37987cb7c">
<h2 class="title">
<a class="jobtitle turnstileLink" data-tn-element="jobTitle" href="/rc/clk?jk=237140e37987cb7c&fccid=05061f170b6114b6&vjs=3" id="jl_237140e37987cb7c" onclick="setRefineByCookie([]); return rclk(this,jobmap[9],true,0);" onmousedown="return rclk(this,jobmap[9],0);" rel="noopener nofollow" target="_blank" title="Jr. Intelligence Analyst - Top Secret w/Polygraph-">
Jr. <b>Intelligence</b> <b>Analyst</b> - Top Secret w/Polygraph-</a>
</h2>
<div class="sjcl">
<div>
<span class="company">
<a class="turnstileLink" data-tn-element="companyName" href="/cmp/Counter-Threat-Solutions" onmousedown="this.href = appendParamsOnce(this.href, 'from=SERP&campaignid=serp-linkcompanyname&fromjk=237140e37987cb7c&jcid=67862a95cf1f198e')" rel="noopener" target="_blank">
Counter Threat Solutions</a></span>
<span class="ratingsDisplay">
<a class="ratingNumber" data-tn-variant="cmplinktst2" href="/cmp/Counter-Threat-Solutions/reviews" onmousedown="this.href = appendParamsOnce(this.href, '?campaignid=cmplinktst2&from=SERP&jt=Jr.+Intelligence+Analyst+-+Top+Secret+w%5C%2FPolygraph-&fromjk=237140e37987cb7c&jcid=67862a95cf1f198e');" rel="noopener" target="_blank" title="Counter Threat Solutions reviews">
<span class="ratingsContent">
5.0<svg class="starIcon" height="12px" role="img" width="12px">
<g>
<path d="M 12.00,4.34 C 12.00,4.34 7.69,3.97 7.69,3.97 7.69,3.97 6.00,0.00 6.00,0.00 6.00,0.00 4.31,3.98 4.31,3.98 4.31,3.98 0.00,4.34 0.00,4.34 0.00,4.34 3.28,7.18 3.28,7.18 3.28,7.18 2.29,11.40 2.29,11.40 2.29,11.40 6.00,9.16 6.00,9.16 6.00,9.16 9.71,11.40 9.71,11.40 9.71,11.40 8.73,7.18 8.73,7.18 8.73,7.18 12.00,4.34 12.00,4.34 Z" style="fill: #FFB103"></path>
</g>
</svg>
</span>
</a>
</span>
</div>
<div class="recJobLoc" data-rc-loc="Warrenton, VA" id="recJobLoc_237140e37987cb7c" style="display: none"></div>
<span class="location accessible-contrast-color-location">Warrenton, VA 20186</span>
</div>
<table class="jobCardShelfContainer" role="presentation"><tr class="jobCardShelf"><td class="jobCardShelfItem indeedApply"><span class="jobCardShelfIcon"><svg fill="none" height="16" viewbox="0 0 20 20" width="16"><rect fill="#FF5A1F" height="20" rx="10" width="20"></rect><path clip-rule="evenodd" d="M15.3125 4.0625L10.8125 15.3125L7.99999 11.375L15.3125 4.0625ZM7.604 12.7576L6.875 15.3125L8.567 14.1054L7.604 12.7576ZM7.20463 10.5796L12.419 5.36525L4.0625 9.125L6.9875 10.7968L7.20463 10.5796Z" fill="white" fill-rule="evenodd"></path></svg></span><span class="iaLabel iaIconActive">Easily apply</span></td></tr></table><div class="summary">
<ul style="list-style-type:circle;margin-top: 0px;margin-bottom: 0px;padding-left:20px;">
<li>Our team brings exceptional understanding of our client's challenges and a wide variety of financial, business, management and technical services to our…</li>
</ul></div>
<div class="jobsearch-SerpJobCard-footer">
<div class="jobsearch-SerpJobCard-footerActions">
<div class="result-link-bar-container">
[... remainder of raw Indeed results-page HTML output truncated: job cards for Counter Threat Solutions (Warrenton, VA), Intel (Phoenix, AZ), 3M (Maplewood, MN), AlertMedia (Austin, TX), City of Dallas, TX, and University of Texas at Austin, plus save/report widgets, tracking scripts, and pagination links ...]
###Markdown
Save Data to Database Now that we have found the div tag that contains the job posts, we need to identify the job title, company, ratings, reviews, salary, and summary, and save those records to our table in the database.
###Code
# identify the job title, company, ratings, reviews, salary, and summary
for div_row in td_resultsCol.find_all('div', class_='jobsearch-SerpJobCard unifiedRow row result'):
# find job title
job_title = None
job_company = None
job_rating = None
job_loc = None
job_salary = None
job_summary = None
for h2_title in div_row.find_all('h2', class_ = 'title'):
job_title = h2_title.a.text.strip().replace("'","_")
for div_dsc in div_row.find_all('div', class_ = 'sjcl'):
#find company name
for span_company in div_dsc.find_all('span', class_ = 'company'):
job_company = span_company.text.strip().replace("'","_")
        # find location (in Indeed's markup the visible location is a span, not a div)
        for span_loc in div_dsc.find_all('span', class_ = 'location accessible-contrast-color-location'):
            job_loc = span_loc.text.strip().replace("'","_")
# find salary
for div_salary in div_row.find_all('div',class_ ='salarySnippet'):
job_salary = div_salary.text.strip().replace("'","_")
#find summary
for div_summary in div_row.find_all('div', class_ = 'summary'):
job_summary = div_summary.text.strip().replace("'","_")
# insert into database
sql_insert = """
insert into gp9.indeed(job_title,job_company,job_loc,job_salary,job_summary)
values('{}','{}','{}','{}','{}')
""".format(job_title,job_company,job_loc,job_salary,job_summary)
cur.execute(sql_insert)
conn.commit()
###Output
_____no_output_____
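###Markdown
One caveat on the cell above: building SQL with `str.format` breaks on values containing quotes (hence the `.replace("'","_")` workaround) and is open to SQL injection. A safer variant, sketched below, uses psycopg2's parameter binding so the driver quotes the values itself; it assumes the same `cur`, `conn`, and extracted variables are in scope.
###Code
# parameterized insert: psycopg2 substitutes and escapes the values,
# so the manual .replace("'","_") escaping is no longer needed
sql_insert = """
insert into gp9.indeed(job_title, job_company, job_loc, job_salary, job_summary)
values (%s, %s, %s, %s, %s)
"""
cur.execute(sql_insert, (job_title, job_company, job_loc, job_salary, job_summary))
conn.commit()
###Output
_____no_output_____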
###Markdown
Query the Table
###Code
df = pandas.read_sql_query('select * from gp9.indeed', conn)
df[:]
df = pandas.read_sql_query('select count(*) as count,job_title from gp9.indeed group by job_title order by count desc ', conn)
df.plot.bar(x='job_title')
cur.close()
conn.close()
###Output
_____no_output_____
###Markdown
Extract Job Posts from Indeed Before extracting job posts from [Indeed](https://www.indeed.com/), make sure you have checked their [robots.txt](https://www.indeed.com/robots.txt) file. Create a table in the database
###Code
import pandas
import configparser
import psycopg2
###Output
_____no_output_____
###Markdown
Read the database connection info from config.ini
###Code
config = configparser.ConfigParser()
config.read('config.ini')
host = config['myaws']['host']
db = config['myaws']['db']
user = config['myaws']['user']
pwd = config['myaws']['pwd']
###Output
_____no_output_____
###Markdown
Establish a connection to the database and create a cursor.
###Code
conn = psycopg2.connect(host = host,
user = user,
password = pwd,
dbname = db
)
cur = conn.cursor()
###Output
_____no_output_____
###Markdown
Design the table in SQL
###Code
# replace the schema and table name to your schema and table name
table_sql = """
CREATE TABLE IF NOT EXISTS gp28.indeed
(
id SERIAL,
job_title VARCHAR(200),
job_company VARCHAR(200),
job_loc VARCHAR(200),
job_salary VARCHAR(200),
job_summary TEXT,
PRIMARY KEY(id)
);
"""
###Output
_____no_output_____
###Markdown
create the table
###Code
cur.execute(table_sql)
conn.commit()
###Output
_____no_output_____
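###Markdown
To double-check that the table exists before scraping, Postgres exposes all tables through `information_schema`; a quick sketch using the cursor from above:
###Code
# list the tables currently defined in the gp28 schema
cur.execute("""
    select table_name
    from information_schema.tables
    where table_schema = 'gp28'
""")
print(cur.fetchall())
###Output
_____no_output_____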
###Markdown
Request HTML [urllib.request](https://docs.python.org/3/library/urllib.request.html) makes simple HTTP requests to visit a web page and get the content via the Python standard library. Here we define the URL to search job posts for "intelligence analyst".
###Code
url = 'https://www.indeed.com/jobs?q=intelligence+analyst&start=3'
import urllib.request
response = urllib.request.urlopen(url)
html_data= response.read()
#print(html_data.decode('utf-8'))
###Output
_____no_output_____
###Markdown
Parse HTML We can use the inspector tool in browsers to analyze webpages and use [beautifulsoup](https://www.crummy.com/software/BeautifulSoup/bs4/doc/) to extract webpage data. pip install beautifulsoup4 if needed.
###Code
!pip install beautifulsoup4
from bs4 import BeautifulSoup
soup = BeautifulSoup(html_data,'html.parser')
#print (soup)
###Output
_____no_output_____
###Markdown
Use the tag.find_all('tag_name', tag_attr = 'possible_value') function to return a list of tags whose attribute equals the possible_value. Common attributes include id and class_. Common functions include tag.text, which returns the visible part of the tag, and tag.get('attribute'), which returns the value of that attribute. Since all the job posts are in the div tag class = 'jobsearch-Sprep...', we need to find that div tag from the body tag.
###Code
for table_resultsBody in soup.find_all('table', id = 'resultsBody'):
pass
#print(table_resultsBody)
for table_pageContent in table_resultsBody.find_all('table', id = 'pageContent'):
pass
#print(table_pageContent)
for td_resultsCol in table_pageContent.find_all('td', id = 'resultsCol'):
pass
#print(td_resultsCol)
###Output
_____no_output_____
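###Markdown
As a quick illustration of the `tag.text` and `tag.get()` accessors described above, the cell below prints the title text and `href` of each job link inside the results column (a minimal sketch; it assumes `td_resultsCol` was found by the previous cell):
###Code
# tag.text gives the visible text; tag.get('attribute') gives an attribute value
for a_job in td_resultsCol.find_all('a', class_ = 'jobtitle'):
    print(a_job.text.strip(), '->', a_job.get('href'))
###Output
_____no_output_____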
###Markdown
Save Data to Database Now that we have found the div tag that contains the job posts, we need to identify the job title, company, ratings, reviews, salary, and summary, and save those records to our table in the database.
###Code
# identify the job title, company, ratings, reviews, salary, and summary
for div_row in td_resultsCol.find_all('div', class_='jobsearch-SerpJobCard unifiedRow row result'):
# find job title
job_title = None
job_company = None
job_rating = None
job_loc = None
job_salary = None
job_summary = None
for h2_title in div_row.find_all('h2', class_ = 'title'):
job_title = h2_title.a.text.strip().replace("'","_")
for div_dsc in div_row.find_all('div', class_ = 'sjcl'):
#find company name
for span_company in div_dsc.find_all('span', class_ = 'company'):
job_company = span_company.text.strip().replace("'","_")
        # find location (in Indeed's markup the visible location is a span, not a div)
        for span_loc in div_dsc.find_all('span', class_ = 'location accessible-contrast-color-location'):
            job_loc = span_loc.text.strip().replace("'","_")
# find salary
for div_salary in div_row.find_all('div',class_ ='salarySnippet'):
job_salary = div_salary.text.strip().replace("'","_")
#find summary
for div_summary in div_row.find_all('div', class_ = 'summary'):
job_summary = div_summary.text.strip().replace("'","_")
# insert into database
sql_insert = """
insert into gp28.indeed(job_title,job_company,job_loc,job_salary,job_summary)
values('{}','{}','{}','{}','{}')
""".format(job_title,job_company,job_loc,job_salary,job_summary)
cur.execute(sql_insert)
conn.commit()
###Output
_____no_output_____
###Markdown
Query the Table
###Code
df = pandas.read_sql_query('select count(*) as count,job_title from gp28.indeed group by job_title order by count desc', conn)
df.plot.bar(x='job_title')
cur.close()
conn.close()
###Output
_____no_output_____
###Markdown
import libraries
###Code
import pandas
import configparser
import psycopg2
config = configparser.ConfigParser()
config.read('config.ini')
host = config['myaws']['host']
db = config['myaws']['db']
user = config['myaws']['user']
pwd = config['myaws']['pwd']
conn = psycopg2.connect(host = host,
user = user,
password = pwd,
dbname = db
)
cur = conn.cursor()
###Output
_____no_output_____
###Markdown
create the house table (make sure to change the schema name to your gp number)
###Code
# replace the schema and table name to your schema and table name
table_sql = """
CREATE TABLE IF NOT EXISTS gp27.house
(
price integer,
bed integer,
bath integer,
area integer,
address VARCHAR(200),
PRIMARY KEY(address)
);
"""
###Output
_____no_output_____
###Markdown
use the cell below only if you want to delete the table (uncomment the drop statement first)
###Code
#conn.rollback()
#table_sql="drop table if exists demo.house"
cur.execute(table_sql)
conn.commit()
###Output
_____no_output_____
###Markdown
define the search region
###Code
url = "https://www.trulia.com/SC/Charleston/"
import urllib.request
response = urllib.request.urlopen(url)
html_data= response.read()
#print(html_data.decode('utf-8'))
from bs4 import BeautifulSoup
soup = BeautifulSoup(html_data,'html.parser')
#print (soup)
###Output
_____no_output_____
###Markdown
insert the records into the database
###Code
for li_class in soup.find_all('li', class_ = 'Grid__CellBox-sc-144isrp-0 SearchResultsList__WideCell-b7y9ki-2 jiZmPM'):
    try:
        # price, e.g. '$425,000' -> 425000
        for price_div in li_class.find_all('div', {'data-testid':'property-price'}):
            price = int(price_div.text.replace('$','').replace(",",""))
        # beds, e.g. '3bd' -> 3
        for bed_div in li_class.find_all('div', {'data-testid':'property-beds'}):
            bed = int(bed_div.text.replace('bd','').replace(",",""))
        # baths, e.g. '2ba' -> 2
        for bath_div in li_class.find_all('div', {'data-testid':'property-baths'}):
            bath = int(bath_div.text.replace('ba','').replace(",",""))
        # floor area, e.g. '1,850 sqft' -> 1850
        for area_div in li_class.find_all('div', {'data-testid':'property-floorSpace'}):
            area = int(area_div.text.split('sqft')[0].replace(",",""))
        # street address (also the primary key of the table)
        for address_div in li_class.find_all('div', {'data-testid':'property-address'}):
            address = address_div.text
        try:
            sql_insert = """
            insert into gp27.house(price,bed,bath,area,address)
            values('{}','{}','{}','{}','{}')
            """.format(price,bed,bath,area,address)
            cur.execute(sql_insert)
            conn.commit()
        except Exception:
            # duplicate address (primary key) or malformed value: undo and move on
            conn.rollback()
    except Exception:
        # listing is missing one of the fields above: skip it
        pass
###Output
_____no_output_____
###Markdown
Query the Table
###Code
df = pandas.read_sql_query('select * from gp27.house ', conn)
df[:10]
###Output
_____no_output_____
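###Markdown
With price and area both stored as integers, a derived price-per-square-foot column is a one-liner (a sketch assuming the `df` loaded above and non-zero `area` values):
###Code
# dollars per square foot, rounded for readability
df['price_per_sqft'] = (df['price'] / df['area']).round(1)
df[['address', 'price', 'area', 'price_per_sqft']].head()
###Output
_____no_output_____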
###Markdown
basic stats
###Code
df.describe()
###Output
_____no_output_____
###Markdown
price distribution
###Code
df['price'].hist()
###Output
_____no_output_____
###Markdown
bed vs bath
###Code
df.plot.scatter(x='bed',y='bath')
###Output
_____no_output_____
###Markdown
import libraries
###Code
import pandas
import configparser
import psycopg2
config = configparser.ConfigParser()
config.read('config.ini')
host = config['myaws']['host']
db = config['myaws']['db']
user = config['myaws']['user']
pwd = config['myaws']['pwd']
conn = psycopg2.connect(host = host,
user = user,
password = pwd,
dbname = db
)
cur = conn.cursor()
###Output
_____no_output_____
###Markdown
create the house table (make sure to change the schema name to your gp number)
###Code
# replace the schema and table name to your schema and table name
table_sql = """
CREATE TABLE IF NOT EXISTS gp1.house
(
price integer,
bed integer,
bath integer,
area integer,
address VARCHAR(200),
PRIMARY KEY(address)
);
"""
###Output
_____no_output_____
###Markdown
use the cell below only if you want to delete the table (uncomment the drop statement first)
###Code
#conn.rollback()
#table_sql="drop table if exists demo.house"
cur.execute(table_sql)
conn.commit()
###Output
_____no_output_____
###Markdown
define the search region
###Code
url = 'https://www.trulia.com/VA/Fairfax/22032/'
import urllib.request
response = urllib.request.urlopen(url)
html_data= response.read()
# print(html_data.decode('utf-8'))
from bs4 import BeautifulSoup
soup = BeautifulSoup(html_data,'html.parser')
# print (soup)
###Output
_____no_output_____
###Markdown
insert the records into the database
###Code
for li_class in soup.find_all('li', class_ = 'Grid__CellBox-sc-144isrp-0 SearchResultsList__WideCell-b7y9ki-2 jiZmPM'):
    try:
        # price, e.g. '$425,000' -> 425000
        for price_div in li_class.find_all('div', {'data-testid':'property-price'}):
            price = int(price_div.text.replace('$','').replace(",",""))
        # beds, e.g. '3bd' -> 3
        for bed_div in li_class.find_all('div', {'data-testid':'property-beds'}):
            bed = int(bed_div.text.replace('bd','').replace(",",""))
        # baths, e.g. '2ba' -> 2
        for bath_div in li_class.find_all('div', {'data-testid':'property-baths'}):
            bath = int(bath_div.text.replace('ba','').replace(",",""))
        # floor area, e.g. '1,850 sqft' -> 1850
        for area_div in li_class.find_all('div', {'data-testid':'property-floorSpace'}):
            area = int(area_div.text.split('sqft')[0].replace(",",""))
        # street address (also the primary key of the table)
        for address_div in li_class.find_all('div', {'data-testid':'property-address'}):
            address = address_div.text
        try:
            sql_insert = """
            insert into gp1.house(price,bed,bath,area,address)
            values('{}','{}','{}','{}','{}')
            """.format(price,bed,bath,area,address)
            cur.execute(sql_insert)
            conn.commit()
        except Exception:
            # duplicate address (primary key) or malformed value: undo and move on
            conn.rollback()
    except Exception:
        # listing is missing one of the fields above: skip it
        pass
###Output
_____no_output_____
###Markdown
Query the Table
###Code
df = pandas.read_sql_query('select * from gp1.house ', conn)
df[:]
###Output
_____no_output_____
###Markdown
basic stats
###Code
df.describe()
###Output
_____no_output_____
###Markdown
price distribution
###Code
df['price'].hist()
###Output
_____no_output_____
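###Markdown
Beyond the histogram, grouping price by bedroom count is a quick sanity check on the scrape (a sketch using the same `df`):
###Code
# listing count and average price for each bedroom count
df.groupby('bed')['price'].agg(['count', 'mean'])
###Output
_____no_output_____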
###Markdown
bed vs bath
###Code
df.plot.scatter(x='bed',y='bath')
###Output
_____no_output_____
###Markdown
Extract Job Posts from Indeed Before extracting job posts from [Indeed](https://www.indeed.com/), make sure you have checked their [robots.txt](https://www.indeed.com/robots.txt) file. Create a table in the database
###Code
import pandas
import configparser
import psycopg2
###Output
_____no_output_____
###Markdown
Read the database connection info from config.ini
###Code
config = configparser.ConfigParser()
config.read('config.ini')
host = config['myaws']['host']
db = config['myaws']['db']
user = config['myaws']['user']
pwd = config['myaws']['pwd']
###Output
_____no_output_____
###Markdown
Establish a connection to the database and create a cursor.
###Code
conn = psycopg2.connect(host = host,
user = user,
password = pwd,
dbname = db
)
cur = conn.cursor()
###Output
_____no_output_____
###Markdown
Design the table in SQL
###Code
# replace the schema and table name to your schema and table name
table_sql = """
CREATE TABLE IF NOT EXISTS gp17.indeed
(
id SERIAL,
job_title VARCHAR(200),
job_company VARCHAR(200),
job_loc VARCHAR(200),
job_salary VARCHAR(200),
job_summary TEXT,
PRIMARY KEY(id)
);
"""
###Output
_____no_output_____
###Markdown
create the table
###Code
cur.execute(table_sql)
conn.commit()
###Output
_____no_output_____
###Markdown
Request HTML [urllib.request](https://docs.python.org/3/library/urllib.request.html) makes simple HTTP requests to visit a web page and get the content via the Python standard library. Here we define the URL to search job posts for "intelligence analyst".
###Code
url = 'https://www.indeed.com/jobs?q=intelligence+analyst&start=2'
import urllib.request
response = urllib.request.urlopen(url)
html_data= response.read()
#print(html_data.decode('utf-8'))
###Output
_____no_output_____
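###Markdown
Some sites reject the default urllib client string, so if the request above comes back empty or with an error page, sending an explicit `User-Agent` header usually helps. This is a minimal sketch, not part of the original flow; the header value is just an example:
###Code
import urllib.request

# urllib.request.Request lets us attach headers before opening the URL
req = urllib.request.Request(url, headers={'User-Agent': 'Mozilla/5.0'})
html_data = urllib.request.urlopen(req).read()
###Output
_____no_output_____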
###Markdown
Parse HTML We can use the inspector tool in browsers to analyze webpages and use [beautifulsoup](https://www.crummy.com/software/BeautifulSoup/bs4/doc/) to extract webpage data. pip install beautifulsoup4 if needed.
###Code
!pip install beautifulsoup4
from bs4 import BeautifulSoup
soup = BeautifulSoup(html_data,'html.parser')
#print (soup)
###Output
_____no_output_____
###Markdown
Use the tag.find_all('tag_name', tag_attr = 'possible_value') function to return a list of tags whose attribute equals the possible_value. Common attributes include id and class_. Common functions include tag.text, which returns the visible part of the tag, and tag.get('attribute'), which returns the value of that attribute. Since all the job posts are in the div tag class = 'jobsearch-Sprep...', we need to find that div tag from the body tag.
###Code
for table_resultsBody in soup.find_all('table', id = 'resultsBody'):
pass
#print(table_resultsBody)
for table_pageContent in table_resultsBody.find_all('table', id = 'pageContent'):
pass
#print(table_pageContent)
for td_resultsCol in table_pageContent.find_all('td', id = 'resultsCol'):
pass
#print(td_resultsCol)
###Output
_____no_output_____
###Markdown
Save Data to Database Now that we have found the div tag that contains the job posts, we need to identify the job title, company, ratings, reviews, salary, and summary, and save those records to our table in the database.
###Code
# identify the job title, company, ratings, reviews, salary, and summary
for div_row in td_resultsCol.find_all('div', class_='jobsearch-SerpJobCard unifiedRow row result'):
# find job title
job_title = None
job_company = None
job_rating = None
job_loc = None
job_salary = None
job_summary = None
for h2_title in div_row.find_all('h2', class_ = 'title'):
job_title = h2_title.a.text.strip().replace("'","_")
for div_dsc in div_row.find_all('div', class_ = 'sjcl'):
#find company name
for span_company in div_dsc.find_all('span', class_ = 'company'):
job_company = span_company.text.strip().replace("'","_")
# find location
for div_loc in div_dsc.find_all('div', class_ = 'location accessible-contrast-color-location'):
job_loc = div_loc.text.strip().replace("'","_")
# find salary
for div_salary in div_row.find_all('div',class_ ='salarySnippet'):
job_salary = div_salary.text.strip().replace("'","_")
#find summary
for div_summary in div_row.find_all('div', class_ = 'summary'):
job_summary = div_summary.text.strip().replace("'","_")
# insert into database
sql_insert = """
insert into gp17.indeed(job_title,job_company,job_loc,job_salary,job_summary)
values('{}','{}','{}','{}','{}')
""".format(job_title,job_company,job_loc,job_salary,job_summary)
cur.execute(sql_insert)
conn.commit()
###Output
_____no_output_____
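###Markdown
Building the INSERT by string formatting is fragile: a stray quote in a scraped field breaks the statement, which is why the loop above replaces single quotes with underscores. A safer sketch against the same gp17.indeed table uses psycopg2 placeholders, which quote the values for you.
###Code
# a parameterized version of the insert; psycopg2 escapes the values itself
sql_insert = """
            insert into gp17.indeed(job_title,job_company,job_loc,job_salary,job_summary)
            values(%s,%s,%s,%s,%s)
            """
cur.execute(sql_insert, (job_title, job_company, job_loc, job_salary, job_summary))
conn.commit()
###Output
_____no_output_____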
###Markdown
Query the Table
###Code
df = pandas.read_sql_query('select count(*) as count, job_title from gp17.indeed group by job_title order by count desc',conn)
df.plot.bar(x= 'job_title')
cur.close()
conn.close()
###Output
_____no_output_____
###Markdown
Extract Job Posts from Indeed Before extracting job posts from [Indeed](https://www.indeed.com/), make sure you have checked their [robots.txt](https://www.indeed.com/robots.txt) file. Create a table in the database
###Code
import pandas
import configparser
import psycopg2
###Output
/home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages/psycopg2/__init__.py:144: UserWarning: The psycopg2 wheel package will be renamed from release 2.8; in order to keep installing from binary please use "pip install psycopg2-binary" instead. For details see: <http://initd.org/psycopg/docs/install.html#binary-install-from-pypi>.
""")
###Markdown
Read the database connection info from the config.ini
###Code
config = configparser.ConfigParser()
config.read('config.ini')
host = config['myaws']['host']
db = config['myaws']['db']
user = config['myaws']['user']
pwd = config['myaws']['pwd']
###Output
_____no_output_____
###Markdown
Establish a connection to the database, and create a cursor.
###Code
conn = psycopg2.connect(host = host,
user = user,
password = pwd,
dbname = db
)
cur = conn.cursor()
###Output
_____no_output_____
###Markdown
Design the table in SQL
###Code
# replace the schema and table name to your schema and table name
table_sql = """
CREATE TABLE IF NOT EXISTS gp20.indeed
(
id SERIAL,
job_title VARCHAR(200),
job_company VARCHAR(200),
job_loc VARCHAR(200),
job_salary VARCHAR(200),
job_summary TEXT,
PRIMARY KEY(id)
);
"""
###Output
_____no_output_____
###Markdown
create the table
###Code
cur.execute(table_sql)
conn.commit()
###Output
_____no_output_____
###Markdown
Request HTML[urllib.request](https://docs.python.org/3/library/urllib.request.html) makes simple HTTP requests to visit a web page and get the content via the Python standard library. Here we define the URL to search job posts about Intelligence analyst.
###Code
url = 'https://www.indeed.com/jobs?q=intelligence+analyst&start=2'
import urllib.request
response = urllib.request.urlopen(url)
html_data= response.read()
#print(html_data.decode('utf-8'))
###Output
_____no_output_____
###Markdown
Parse HTMLWe can use the inspector tool in browsers to analyze webpages and use [beautifulsoup](https://www.crummy.com/software/BeautifulSoup/bs4/doc/) to extract webpage data. Pip install beautifulsoup4 if needed.
###Code
!pip install beautifulsoup4
from bs4 import BeautifulSoup
soup = BeautifulSoup(html_data,'html.parser')
#print (soup)
###Output
_____no_output_____
###Markdown
Use the tag.find_all('tag_name', tag_attr = 'possible_value') function to return a list of tags where the attribute equals the possible_value. Common attributes include: id, class_. Common functions include: tag.text (returns the visible part of the tag) and tag.get('attribute') (returns the value of the attribute of the tag). Since all the job posts are in the div tag with class = 'jobsearch-Sprep...', we need to find that div tag from the body tag.
###Code
for table_resultsBody in soup.find_all('table', id = 'resultsBody'):
pass
#print(table_resultsBody)
for table_pageContent in table_resultsBody.find_all('table', id = 'pageContent'):
pass
#print(table_pageContent)
for td_resultsCol in table_pageContent.find_all('td', id = 'resultsCol'):
pass
#print(td_resultsCol)
###Output
_____no_output_____
###Markdown
Save Data to DatabaseNow that we have found the div tag that contains the job posts, we need to identify the job title, company, ratings, reviews, salary, and summary. We can then save those records to our table in the database.
###Code
# identify the job title, company, ratings, reviews, salary, and summary
for div_row in td_resultsCol.find_all('div', class_='jobsearch-SerpJobCard unifiedRow row result'):
# find job title
job_title = None
job_company = None
job_rating = None
job_loc = None
job_salary = None
job_summary = None
for h2_title in div_row.find_all('h2', class_ = 'title'):
job_title = h2_title.a.text.strip().replace("'","_")
for div_dsc in div_row.find_all('div', class_ = 'sjcl'):
#find company name
for span_company in div_dsc.find_all('span', class_ = 'company'):
job_company = span_company.text.strip().replace("'","_")
# find location
for div_loc in div_dsc.find_all('div', class_ = 'location accessible-contrast-color-location'):
job_loc = div_loc.text.strip().replace("'","_")
# find salary
for div_salary in div_row.find_all('div',class_ ='salarySnippet'):
job_salary = div_salary.text.strip().replace("'","_")
#find summary
for div_summary in div_row.find_all('div', class_ = 'summary'):
job_summary = div_summary.text.strip().replace("'","_")
# insert into database
sql_insert = """
insert into gp20.indeed(job_title,job_company,job_loc,job_salary,job_summary)
values('{}','{}','{}','{}','{}')
""".format(job_title,job_company,job_loc,job_salary,job_summary)
cur.execute(sql_insert)
conn.commit()
###Output
_____no_output_____
###Markdown
Query the Table
###Code
df = pandas.read_sql_query('select count(*) as count,job_title from gp20.indeed group by job_title order by count desc ', conn)
df.plot.bar(x='job_title')
cur.close()
conn.close()
###Output
_____no_output_____
###Markdown
import libraries
###Code
import pandas
import configparser
import psycopg2
config = configparser.ConfigParser()
config.read('config.ini')
host = config['myaws']['host']
db = config['myaws']['db']
user = config['myaws']['user']
pwd = config['myaws']['pwd']
conn = psycopg2.connect(host = host,
user = user,
password = pwd,
dbname = db
)
cur = conn.cursor()
###Output
_____no_output_____
###Markdown
create the house table; make sure to change the schema name to your gp number
###Code
table_sql = """
CREATE TABLE IF NOT EXISTS gp2.house
(
price integer,
bed integer,
bath integer,
area integer,
address VARCHAR(200),
PRIMARY KEY(address)
);
"""
###Output
_____no_output_____
###Markdown
use the cell below only if you want to delete the table
###Code
#conn.rollback()
#table_sql="drop table if exists demo.house"
cur.execute(table_sql)
conn.commit()
###Output
_____no_output_____
###Markdown
define the search region
###Code
url = 'https://www.trulia.com/VA/Madison/22727/'
import urllib.request
response = urllib.request.urlopen(url)
html_data= response.read()
#print(html_data.decode('utf-8'))
from bs4 import BeautifulSoup
soup = BeautifulSoup(html_data,'html.parser')
#print (soup)
###Output
_____no_output_____
###Markdown
insert the records into database
###Code
for li_class in soup.find_all('li', class_ = 'Grid__CellBox-sc-144isrp-0 SearchResultsList__WideCell-b7y9ki-2 jiZmPM'):
try:
for price_div in li_class.find_all('div',{'data-testid':'property-price'}):
price =int(price_div.text.replace('$','').replace(",",""))
for bed_div in li_class.find_all('div', {'data-testid':'property-beds'}):
bed= int(bed_div.text.replace('bd','').replace(",",""))
for bath_div in li_class.find_all('div',{'data-testid':'property-baths'}):
bath =int(bath_div.text.replace('ba','').replace(",",""))
for area_div in li_class.find_all('div',{'data-testid':'property-floorSpace'}):
area=int(area_div.text.split('sqft')[0].replace(",",""))
for address_div in li_class.find_all('div',{'data-testid':'property-address'}):
address =address_div.text
try:
sql_insert = """
insert into gp2.house(price,bed,bath,area,address)
values('{}','{}','{}','{}','{}')
""".format(price,bed,bath,area,address)
cur.execute(sql_insert)
conn.commit()
        except:
            # roll back the failed insert so later statements can still run
            conn.rollback()
    except:
        # skip listings that are missing a field or fail to parse
        pass
###Output
_____no_output_____
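###Markdown
Because address is the primary key, re-running the scrape raises a duplicate-key error for every listing already stored, which the inner except/rollback silently absorbs. A sketch of the insert using Postgres's ON CONFLICT clause (same gp2.house table) skips duplicates without relying on the exception handler.
###Code
# let Postgres skip rows whose address is already in the table
sql_insert = """
            insert into gp2.house(price,bed,bath,area,address)
            values(%s,%s,%s,%s,%s)
            on conflict (address) do nothing
            """
cur.execute(sql_insert, (price, bed, bath, area, address))
conn.commit()
###Output
_____no_output_____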
###Markdown
Query the Table
###Code
df = pandas.read_sql_query('select * from gp2.house ', conn)
df[:]
###Output
_____no_output_____
###Markdown
basic stat
###Code
df.describe()
###Output
_____no_output_____
###Markdown
price distribution
###Code
df['price'].hist()
###Output
_____no_output_____
###Markdown
bed vs bath
###Code
df.plot.scatter(x='bed',y='bath')
###Output
_____no_output_____
###Markdown
import libraries
###Code
import pandas
import configparser
import psycopg2
config = configparser.ConfigParser()
config.read('config.ini')
host = config['myaws']['host']
db = config['myaws']['db']
user = config['myaws']['user']
pwd = config['myaws']['pwd']
conn = psycopg2.connect(host = host,
user = user,
password = pwd,
dbname = db
)
cur = conn.cursor()
###Output
_____no_output_____
###Markdown
create the hosue table make sure change the schema name to your gp number
###Code
# replace the schema and table name to your schema and table name
table_sql = """
CREATE TABLE IF NOT EXISTS gp7.house
(
price integer,
bed integer,
bath integer,
area integer,
address VARCHAR(200),
PRIMARY KEY(address)
);
"""
###Output
_____no_output_____
###Markdown
use the cell below only if you want to delete the table
###Code
#conn.rollback()
#table_sql="drop table if exists demo.house"
cur.execute(table_sql)
conn.commit()
###Output
_____no_output_____
###Markdown
define the search region
###Code
url = 'https://www.trulia.com/VA/Mount_Vernon/22309/'
import urllib.request
response = urllib.request.urlopen(url)
html_data= response.read()
# print(html_data.decode('utf-8'))
from bs4 import BeautifulSoup
soup = BeautifulSoup(html_data,'html.parser')
# print (soup)
###Output
_____no_output_____
###Markdown
insert the records into database
###Code
for li_class in soup.find_all('li', class_ = 'Grid__CellBox-sc-144isrp-0 SearchResultsList__WideCell-b7y9ki-2 jiZmPM'):
try:
for price_div in li_class.find_all('div',{'data-testid':'property-price'}):
price =int(price_div.text.replace('$','').replace(",",""))
for bed_div in li_class.find_all('div', {'data-testid':'property-beds'}):
bed= int(bed_div.text.replace('bd','').replace(",",""))
for bath_div in li_class.find_all('div',{'data-testid':'property-baths'}):
bath =int(bath_div.text.replace('ba','').replace(",",""))
for area_div in li_class.find_all('div',{'data-testid':'property-floorSpace'}):
area=int(area_div.text.split('sqft')[0].replace(",",""))
for address_div in li_class.find_all('div',{'data-testid':'property-address'}):
address =address_div.text
try:
sql_insert = """
insert into gp7.house(price,bed,bath,area,address)
values('{}','{}','{}','{}','{}')
""".format(price,bed,bath,area,address)
cur.execute(sql_insert)
conn.commit()
except:
conn.rollback()
except:
pass
###Output
_____no_output_____
###Markdown
Query the Table
###Code
df = pandas.read_sql_query('select * from gp7.house ', conn)
df[:]
###Output
_____no_output_____
###Markdown
basic stat
###Code
df.describe()
###Output
_____no_output_____
###Markdown
price distribution
###Code
df['price'].hist()
###Output
_____no_output_____
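###Markdown
Since the table stores both price and area, a derived price-per-square-foot column makes the listings easier to compare than raw prices. A small pandas sketch on the df loaded above; rows with a missing or zero area will produce NaN or inf values.
###Code
# derive price per square foot and look at its distribution
df['price_per_sqft'] = df['price'] / df['area']
df['price_per_sqft'].hist()
###Output
_____no_output_____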
###Markdown
bed vs bath
###Code
df.plot.scatter(x='bed',y='bath')
###Output
_____no_output_____
###Markdown
Extract Job Posts from Indeed Before extracting job posts from [Indeed](https://www.indeed.com/), make sure you have checked their [robots.txt](https://www.indeed.com/robots.txt) file. Create a table in the database
###Code
import pandas
import configparser
import psycopg2
###Output
_____no_output_____
###Markdown
Read the database connection info from the config.ini
###Code
config = configparser.ConfigParser()
config.read('config.ini')
host = config['myaws']['host']
db = config['myaws']['db']
user = config['myaws']['user']
pwd = config['myaws']['pwd']
###Output
_____no_output_____
###Markdown
Establish a connection to the database, and create a cursor.
###Code
conn = psycopg2.connect(host = host,
user = user,
password = pwd,
dbname = db
)
cur = conn.cursor()
###Output
_____no_output_____
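###Markdown
psycopg2 connections and cursors also work as context managers: the connection's with-block commits on success and rolls back on error, and the cursor's with-block closes the cursor. A minimal sketch of the same connection pattern, using separate conn2/cur2 names so the conn and cur used in the rest of this notebook are untouched.
###Code
# context-manager form: the with-block commits on success and rolls back on error
with psycopg2.connect(host=host, user=user, password=pwd, dbname=db) as conn2:
    with conn2.cursor() as cur2:
        cur2.execute('select 1')
        print(cur2.fetchone())
conn2.close()  # the with-block ends the transaction but does not close the connection
###Output
_____no_output_____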
###Markdown
Design the table in SQL
###Code
# replace the schema and table name to your schema and table name
table_sql = """
CREATE TABLE IF NOT EXISTS gp5.indeed
(
id SERIAL,
job_title VARCHAR(200),
job_company VARCHAR(200),
job_loc VARCHAR(200),
job_salary VARCHAR(200),
job_summary TEXT,
PRIMARY KEY(id)
);
"""
###Output
_____no_output_____
###Markdown
create the table
###Code
cur.execute(table_sql)
conn.commit()
###Output
_____no_output_____
###Markdown
Request HTML[urllib.request](https://docs.python.org/3/library/urllib.request.html) makes simple HTTP requests to visit a web page and get the content via the Python standard library. Here we define the URL to search job posts about Intelligence analyst.
###Code
url = 'https://www.indeed.com/jobs?q=intelligence+analyst&start=5'
import urllib.request
response = urllib.request.urlopen(url)
html_data= response.read()
#print(html_data.decode('utf-8'))
###Output
_____no_output_____
###Markdown
Parse HTMLWe can use the inspector tool in browsers to analyze webpages and use [beautifulsoup](https://www.crummy.com/software/BeautifulSoup/bs4/doc/) to extract webpage data. Pip install beautifulsoup4 if needed.
###Code
!pip install beautifulsoup4
from bs4 import BeautifulSoup
soup = BeautifulSoup(html_data,'html.parser')
#print (soup)
###Output
_____no_output_____
###Markdown
Use the tag.find_all('tag_name', tag_attr = 'possible_value') function to return a list of tags where the attribute equals the possible_value. Common attributes include: id, class_. Common functions include: tag.text (returns the visible part of the tag) and tag.get('attribute') (returns the value of the attribute of the tag). Since all the job posts are in the div tag with class = 'jobsearch-Sprep...', we need to find that div tag from the body tag.
###Code
for table_resultsBody in soup.find_all('table', id = 'resultsBody'):
pass
#print(table_resultsBody)
for table_pageContent in table_resultsBody.find_all('table', id = 'pageContent'):
pass
#print(table_pageContent)
for td_resultsCol in table_pageContent.find_all('td', id = 'resultsCol'):
pass
#print(td_resultsCol)
###Output
_____no_output_____
###Markdown
Save Data to DatabaseNow that we have found the div tag that contains the job posts, we need to identify the job title, company, ratings, reviews, salary, and summary. We can then save those records to our table in the database.
###Code
# identify the job title, company, ratings, reviews, salary, and summary
for div_row in td_resultsCol.find_all('div', class_='jobsearch-SerpJobCard unifiedRow row result'):
# find job title
job_title = None
job_company = None
job_rating = None
job_loc = None
job_salary = None
job_summary = None
for h2_title in div_row.find_all('h2', class_ = 'title'):
job_title = h2_title.a.text.strip().replace("'","_")
for div_dsc in div_row.find_all('div', class_ = 'sjcl'):
#find company name
for span_company in div_dsc.find_all('span', class_ = 'company'):
job_company = span_company.text.strip().replace("'","_")
# find location
for div_loc in div_dsc.find_all('div', class_ = 'location accessible-contrast-color-location'):
job_loc = div_loc.text.strip().replace("'","_")
# find salary
for div_salary in div_row.find_all('div',class_ ='salarySnippet'):
job_salary = div_salary.text.strip().replace("'","_")
#find summary
for div_summary in div_row.find_all('div', class_ = 'summary'):
job_summary = div_summary.text.strip().replace("'","_")
# insert into database
sql_insert = """
insert into gp5.indeed(job_title,job_company,job_loc,job_salary,job_summary)
values('{}','{}','{}','{}','{}')
""".format(job_title,job_company,job_loc,job_salary,job_summary)
cur.execute(sql_insert)
conn.commit()
###Output
_____no_output_____
###Markdown
View the Table
###Code
df = pandas.read_sql_query('select count(*) as count,job_title from gp5.indeed group by job_title order by count desc', conn)
df[:]
###Output
_____no_output_____
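###Markdown
The same GROUP BY pattern works for any column; for example, counting posts per company shows which employers dominate the page. A sketch against the same gp5.indeed table, run before the connection is closed below.
###Code
# count job posts per company (run before closing the connection)
df_company = pandas.read_sql_query(
    'select count(*) as count, job_company from gp5.indeed group by job_company order by count desc',
    conn)
df_company[:10]
###Output
_____no_output_____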
###Markdown
Query the Table
###Code
df = pandas.read_sql_query('select count(*) as count,job_title from gp5.indeed group by job_title order by count desc ', conn)
df.plot.bar(x='job_title')
cur.close()
conn.close()
###Output
_____no_output_____
###Markdown
Extract Job Posts from Indeed Before extracting job posts from [Indeed](https://www.indeed.com/), make sure you have checked their [robots.txt](https://www.indeed.com/robots.txt) file. Create a table in the database
###Code
import pandas
import configparser
import psycopg2
###Output
_____no_output_____
###Markdown
Read the database connection info from the config.ini
###Code
config = configparser.ConfigParser()
config.read('config.ini')
host = config['myaws']['host']
db = config['myaws']['db']
user = config['myaws']['user']
pwd = config['myaws']['pwd']
###Output
_____no_output_____
###Markdown
Establish a connection to the database, and create a cursor.
###Code
conn = psycopg2.connect(host = host,
user = user,
password = pwd,
dbname = db
)
cur = conn.cursor()
###Output
_____no_output_____
###Markdown
Design the table in SQL
###Code
# replace the schema and table name to your schema and table name
table_sql = """
CREATE TABLE IF NOT EXISTS gp7.indeed
(
id SERIAL,
job_title VARCHAR(200),
job_company VARCHAR(200),
job_loc VARCHAR(200),
job_salary VARCHAR(200),
job_summary TEXT,
PRIMARY KEY(id)
);
"""
###Output
_____no_output_____
###Markdown
create the table
###Code
cur.execute(table_sql)
conn.commit()
###Output
_____no_output_____
###Markdown
Request HTML[urllib.request](https://docs.python.org/3/library/urllib.request.html) makes simple HTTP requests to visit a web page and get the content via the Python standard library. Here we define the URL to search job posts about Intelligence analyst.
###Code
url = 'https://www.indeed.com/jobs?q=intelligence+analyst&start=2'
import urllib.request
response = urllib.request.urlopen(url)
html_data= response.read()
#print(html_data.decode('utf-8'))
###Output
_____no_output_____
###Markdown
Parse HTMLWe can use the inspector tool in browsers to analyze webpages and use [beautifulsoup](https://www.crummy.com/software/BeautifulSoup/bs4/doc/) to extract webpage data. Pip install beautifulsoup4 if needed.
###Code
!pip install beautifulsoup4
from bs4 import BeautifulSoup
soup = BeautifulSoup(html_data,'html.parser')
#print (soup)
###Output
_____no_output_____
###Markdown
Use the tag.find_all('tag_name', tag_attr = 'possible_value') function to return a list of tags where the attribute equals the possible_value. Common attributes include: id, class_. Common functions include: tag.text (returns the visible part of the tag) and tag.get('attribute') (returns the value of the attribute of the tag). Since all the job posts are in the div tag with class = 'jobsearch-Sprep...', we need to find that div tag from the body tag.
###Code
for table_resultsBody in soup.find_all('table', id = 'resultsBody'):
pass
#print(table_resultsBody)
for table_pageContent in table_resultsBody.find_all('table', id = 'pageContent'):
pass
#print(table_pageContent)
for td_resultsCol in table_pageContent.find_all('td', id = 'resultsCol'):
pass
#print(td_resultsCol)
###Output
_____no_output_____
###Markdown
Save Data to DatabaseNow that we have found the div tag that contains the job posts, we need to identify the job title, company, ratings, reviews, salary, and summary. We can then save those records to our table in the database.
###Code
# identify the job title, company, ratings, reviews, salary, and summary
for div_row in td_resultsCol.find_all('div', class_='jobsearch-SerpJobCard unifiedRow row result'):
# find job title
job_title = None
job_company = None
job_rating = None
job_loc = None
job_salary = None
job_summary = None
for h2_title in div_row.find_all('h2', class_ = 'title'):
job_title = h2_title.a.text.strip().replace("'","_")
for div_dsc in div_row.find_all('div', class_ = 'sjcl'):
#find company name
for span_company in div_dsc.find_all('span', class_ = 'company'):
job_company = span_company.text.strip().replace("'","_")
# find location
for div_loc in div_dsc.find_all('div', class_ = 'location accessible-contrast-color-location'):
job_loc = div_loc.text.strip().replace("'","_")
# find salary
for div_salary in div_row.find_all('div',class_ ='salarySnippet'):
job_salary = div_salary.text.strip().replace("'","_")
#find summary
for div_summary in div_row.find_all('div', class_ = 'summary'):
job_summary = div_summary.text.strip().replace("'","_")
# insert into database
sql_insert = """
insert into gp7.indeed(job_title,job_company,job_loc,job_salary,job_summary)
values('{}','{}','{}','{}','{}')
""".format(job_title,job_company,job_loc,job_salary,job_summary)
cur.execute(sql_insert)
conn.commit()
###Output
_____no_output_____
###Markdown
Query the Table
###Code
df = pandas.read_sql_query('select * from gp7.indeed', conn)
df[:]
cur.close()
conn.close()
###Output
_____no_output_____
###Markdown
Extract Job Posts from Indeed Before extracting job posts from [Indeed](https://www.indeed.com/), make sure you have checked their [robots.txt](https://www.indeed.com/robots.txt) file. Create a table in the database
###Code
import pandas
import configparser
import psycopg2
###Output
_____no_output_____
###Markdown
Read the database connection info from the config.ini
###Code
config = configparser.ConfigParser()
config.read('config.ini')
host = config['myaws']['host']
db = config['myaws']['db']
user = config['myaws']['user']
pwd = config['myaws']['pwd']
###Output
_____no_output_____
###Markdown
Establish a connection to the database, and create a cursor.
###Code
conn = psycopg2.connect(host = host,
user = user,
password = pwd,
dbname = db
)
cur = conn.cursor()
###Output
_____no_output_____
###Markdown
Design the table in SQL
###Code
# replace the schema and table name to your schema and table name
table_sql = """
CREATE TABLE IF NOT EXISTS gp15.indeed
(
id SERIAL,
job_title VARCHAR(200),
job_company VARCHAR(200),
job_loc VARCHAR(200),
job_salary VARCHAR(200),
job_summary TEXT,
PRIMARY KEY(id)
);
"""
###Output
_____no_output_____
###Markdown
create the table
###Code
cur.execute(table_sql)
conn.commit()
###Output
_____no_output_____
###Markdown
Request HTML[urllib.request](https://docs.python.org/3/library/urllib.request.html) makes simple HTTP requests to visit a web page and get the content via the Python standard library. Here we define the URL to search job posts about Intelligence analyst.
###Code
url = 'https://www.indeed.com/jobs?q=intelligence+analyst&start=2'
import urllib.request
response = urllib.request.urlopen(url)
html_data= response.read()
#print(html_data.decode('utf-8'))
###Output
_____no_output_____
###Markdown
Parse HTMLWe can use the inspector tool in browsers to analyze webpages and use [beautifulsoup](https://www.crummy.com/software/BeautifulSoup/bs4/doc/) to extract webpage data. Pip install beautifulsoup4 if needed.
###Code
!pip install beautifulsoup4
from bs4 import BeautifulSoup
soup = BeautifulSoup(html_data,'html.parser')
#print (soup)
###Output
_____no_output_____
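###Markdown
As an alternative to the chained find_all loops below, BeautifulSoup's select() takes a CSS selector, so the nested table/td lookup can be written in one line. A minimal sketch:
###Code
# CSS-selector version of the nested lookup: td with id resultsCol inside the results table
results = soup.select('table#resultsBody td#resultsCol')
td_resultsCol = results[0] if results else None
###Output
_____no_output_____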
###Markdown
Use the tag.find_all('tag_name', tag_attr = 'possible_value') function to return a list of tags where the attribute equals the possible_value. Common attributes include: id, class_. Common functions include: tag.text (returns the visible part of the tag) and tag.get('attribute') (returns the value of the attribute of the tag). Since all the job posts are in the div tag with class = 'jobsearch-Sprep...', we need to find that div tag from the body tag.
###Code
for table_resultsBody in soup.find_all('table', id = 'resultsBody'):
pass
#print(table_resultsBody)
for table_pageContent in table_resultsBody.find_all('table', id = 'pageContent'):
pass
#print(table_pageContent)
for td_resultsCol in table_pageContent.find_all('td', id = 'resultsCol'):
pass
#print(td_resultsCol)
###Output
_____no_output_____
###Markdown
Save Data to DatabaseNow that we have found the div tag that contains the job posts, we need to identify the job title, company, ratings, reviews, salary, and summary. We can then save those records to our table in the database.
###Code
# identify the job title, company, ratings, reviews, salary, and summary
for div_row in td_resultsCol.find_all('div', class_='jobsearch-SerpJobCard unifiedRow row result'):
# find job title
job_title = None
job_company = None
job_rating = None
job_loc = None
job_salary = None
job_summary = None
for h2_title in div_row.find_all('h2', class_ = 'title'):
job_title = h2_title.a.text.strip().replace("'","_")
for div_dsc in div_row.find_all('div', class_ = 'sjcl'):
#find company name
for span_company in div_dsc.find_all('span', class_ = 'company'):
job_company = span_company.text.strip().replace("'","_")
# find location
for div_loc in div_dsc.find_all('div', class_ = 'location accessible-contrast-color-location'):
job_loc = div_loc.text.strip().replace("'","_")
# find salary
for div_salary in div_row.find_all('div',class_ ='salarySnippet'):
job_salary = div_salary.text.strip().replace("'","_")
#find summary
for div_summary in div_row.find_all('div', class_ = 'summary'):
job_summary = div_summary.text.strip().replace("'","_")
# insert into database
sql_insert = """
insert into gp15.indeed(job_title,job_company,job_loc,job_salary,job_summary)
values('{}','{}','{}','{}','{}')
""".format(job_title,job_company,job_loc,job_salary,job_summary)
cur.execute(sql_insert)
conn.commit()
###Output
_____no_output_____
###Markdown
Query the Table
###Code
df = pandas.read_sql_query('select count(*) as count,job_title from gp15.indeed group by job_title order by count desc ', conn)
df.plot.bar(x='job_title')
cur.close()
conn.close()
###Output
_____no_output_____
###Markdown
lab 6 import libraries
###Code
import pandas
import configparser
import psycopg2
config = configparser.ConfigParser()
config.read('config.ini')
host = config['myaws']['host']
db = config['myaws']['db']
user = config['myaws']['user']
pwd = config['myaws']['pwd']
conn = psycopg2.connect(host = host,
user = user,
password = pwd,
dbname = db
)
cur = conn.cursor()
###Output
_____no_output_____
###Markdown
create the house table; make sure to change the schema name to your gp number
###Code
# replace the schema and table name to your schema and table name
table_sql = """
CREATE TABLE IF NOT EXISTS gp20.house
(
price integer,
bed integer,
bath integer,
area integer,
address VARCHAR(200),
PRIMARY KEY(address)
);
"""
###Output
_____no_output_____
###Markdown
use the cell below only if you want to delete the table
###Code
#conn.rollback()
#table_sql="drop table if exists demo.house"
cur.execute(table_sql)
conn.commit()
###Output
_____no_output_____
###Markdown
define the search region
###Code
url = 'https://www.trulia.com/for_sale/Little_Rock,AR/8_zm/'
import urllib.request
response = urllib.request.urlopen(url)
html_data= response.read()
# print(html_data.decode('utf-8'))
from bs4 import BeautifulSoup
soup = BeautifulSoup(html_data,'html.parser')
# print (soup)
###Output
_____no_output_____
###Markdown
insert the records into database
###Code
for li_class in soup.find_all('li', class_ = 'Grid__CellBox-sc-144isrp-0 SearchResultsList__WideCell-b7y9ki-2 jiZmPM'):
try:
for price_div in li_class.find_all('div',{'data-testid':'property-price'}):
price =int(price_div.text.replace('$','').replace(",",""))
for bed_div in li_class.find_all('div', {'data-testid':'property-beds'}):
bed= int(bed_div.text.replace('bd','').replace(",",""))
for bath_div in li_class.find_all('div',{'data-testid':'property-baths'}):
bath =int(bath_div.text.replace('ba','').replace(",",""))
for area_div in li_class.find_all('div',{'data-testid':'property-floorSpace'}):
area=int(area_div.text.split('sqft')[0].replace(",",""))
for address_div in li_class.find_all('div',{'data-testid':'property-address'}):
address =address_div.text
try:
sql_insert = """
insert into gp20.house(price,bed,bath,area,address)
values('{}','{}','{}','{}','{}')
""".format(price,bed,bath,area,address)
cur.execute(sql_insert)
conn.commit()
except:
conn.rollback()
except:
pass
###Output
_____no_output_____
###Markdown
Query the Table
###Code
df = pandas.read_sql_query('select * from gp20.house ', conn)
df[:10]
###Output
_____no_output_____
###Markdown
basic stat
###Code
df.describe()
###Output
_____no_output_____
###Markdown
price distribution
###Code
df['price'].hist()
###Output
_____no_output_____
###Markdown
bed vs bath
###Code
df.plot.scatter(x='bed',y='bath')
###Output
_____no_output_____
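###Markdown
price vs area The same scatter idea applied to price against floor area usually shows a clearer trend than bed vs bath; a one-line pandas sketch on the df above:
###Code
df.plot.scatter(x='area', y='price')
###Output
_____no_output_____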
###Markdown
Extract Job Posts from Indeed Before extracting job posts from [Indeed](https://www.indeed.com/), make sure you have checked their [robots.txt](https://www.indeed.com/robots.txt) file. Create a table in the database
###Code
import pandas
import configparser
import psycopg2
###Output
_____no_output_____
###Markdown
Read the database connection info from the config.ini
###Code
config = configparser.ConfigParser()
config.read('config.ini')
host = config['myaws']['host']
db = config['myaws']['db']
user = config['myaws']['user']
pwd = config['myaws']['pwd']
###Output
_____no_output_____
###Markdown
Establish a connection to the database, and create a cursor.
###Code
conn = psycopg2.connect(host = host,
user = user,
password = pwd,
dbname = db
)
cur = conn.cursor()
###Output
_____no_output_____
###Markdown
Design the table in SQL
###Code
# replace the schema and table name to your schema and table name
table_sql = """
CREATE TABLE IF NOT EXISTS gp24.indeed
(
id SERIAL,
job_title VARCHAR(200),
job_company VARCHAR(200),
job_loc VARCHAR(200),
job_salary VARCHAR(200),
job_summary TEXT,
PRIMARY KEY(id)
);
"""
###Output
_____no_output_____
###Markdown
create the table
###Code
cur.execute(table_sql)
conn.commit()
###Output
_____no_output_____
###Markdown
Request HTML[urllib.request](https://docs.python.org/3/library/urllib.request.html) makes simple HTTP requests to visit a web page and get the content via the Python standard library. Here we define the URL to search job posts about Intelligence analyst.
###Code
url = 'https://www.indeed.com/jobs?q=intelligence+analyst&start=2'
import urllib.request
response = urllib.request.urlopen(url)
html_data= response.read()
#print(html_data.decode('utf-8'))
###Output
_____no_output_____
###Markdown
Parse HTMLWe can use the inspector tool in browsers to analyze webpages and use [beautifulsoup](https://www.crummy.com/software/BeautifulSoup/bs4/doc/) to extract webpage data. Pip install beautifulsoup4 if needed.
###Code
!pip install beautifulsoup4
from bs4 import BeautifulSoup
soup = BeautifulSoup(html_data,'html.parser')
#print (soup)
###Output
_____no_output_____
###Markdown
Use the tag.find_all('tag_name', tag_attr = 'possible_value') function to return a list of tags where the attribute equals the possible_value. Common attributes include: id, class_. Common functions include: tag.text (returns the visible part of the tag) and tag.get('attribute') (returns the value of the attribute of the tag). Since all the job posts are in the div tag with class = 'jobsearch-Sprep...', we need to find that div tag from the body tag.
###Code
for table_resultsBody in soup.find_all('table', id = 'resultsBody'):
pass
#print(table_resultsBody)
for table_pageContent in table_resultsBody.find_all('table', id = 'pageContent'):
pass
#print(table_pageContent)
for td_resultsCol in table_pageContent.find_all('td', id = 'resultsCol'):
pass
#print(td_resultsCol)
###Output
_____no_output_____
###Markdown
Save Data to DatabaseNow that we have found the div tag that contains the job posts, we need to identify the job title, company, ratings, reviews, salary, and summary. We can then save those records to our table in the database.
###Code
# identify the job title, company, ratings, reviews, salary, and summary
for div_row in td_resultsCol.find_all('div', class_='jobsearch-SerpJobCard unifiedRow row result'):
# find job title
job_title = None
job_company = None
job_rating = None
job_loc = None
job_salary = None
job_summary = None
for h2_title in div_row.find_all('h2', class_ = 'title'):
job_title = h2_title.a.text.strip().replace("'","_")
for div_dsc in div_row.find_all('div', class_ = 'sjcl'):
#find company name
for span_company in div_dsc.find_all('span', class_ = 'company'):
job_company = span_company.text.strip().replace("'","_")
# find location
for div_loc in div_dsc.find_all('div', class_ = 'location accessible-contrast-color-location'):
job_loc = div_loc.text.strip().replace("'","_")
# find salary
for div_salary in div_row.find_all('div',class_ ='salarySnippet'):
job_salary = div_salary.text.strip().replace("'","_")
#find summary
for div_summary in div_row.find_all('div', class_ = 'summary'):
job_summary = div_summary.text.strip().replace("'","_")
# insert into database
sql_insert = """
insert into gp24.indeed(job_title,job_company,job_loc,job_salary,job_summary)
values('{}','{}','{}','{}','{}')
""".format(job_title,job_company,job_loc,job_salary,job_summary)
cur.execute(sql_insert)
conn.commit()
###Output
_____no_output_____
###Markdown
Query the Table
###Code
df = pandas.read_sql_query('select count(*) as count,job_title from gp24.indeed group by job_title order by count desc ', conn)
df.plot.bar(x='job_title')
cur.close()
conn.close()
###Output
_____no_output_____
###Markdown
import libraries
###Code
import pandas
import configparser
import psycopg2
config = configparser.ConfigParser()
config.read('config.ini')
host = config['myaws']['host']
db = config['myaws']['db']
user = config['myaws']['user']
pwd = config['myaws']['pwd']
conn = psycopg2.connect(host = host,
user = user,
password = pwd,
dbname = db
)
cur = conn.cursor()
###Output
_____no_output_____
###Markdown
create the house table; make sure to change the schema name to your gp number
###Code
# replace the schema and table name to your schema and table name
table_sql = """
CREATE TABLE IF NOT EXISTS gp17.house1
(
price integer,
bed integer,
bath integer,
area integer,
address VARCHAR(200),
PRIMARY KEY(address)
);
"""
###Output
_____no_output_____
###Markdown
use the cell below only if you want to delete the table
###Code
#conn.rollback()
#table_sql="drop table if exists demo.house"
cur.execute(table_sql)
conn.commit()
###Output
_____no_output_____
###Markdown
define the search region
###Code
url = 'https://www.trulia.com/VA/Alexandria/'
import urllib.request
response = urllib.request.urlopen(url)
html_data= response.read()
#print(html_data.decode('utf-8'))
from bs4 import BeautifulSoup
soup = BeautifulSoup(html_data,'html.parser')
#print (soup)
###Output
_____no_output_____
###Markdown
insert the records into database
###Code
for li_class in soup.find_all('li', class_ = 'Grid__CellBox-sc-144isrp-0 SearchResultsList__WideCell-b7y9ki-2 jiZmPM'):
try:
for price_div in li_class.find_all('div',{'data-testid':'property-price'}):
price =int(price_div.text.replace('$','').replace(",",""))
for bed_div in li_class.find_all('div', {'data-testid':'property-beds'}):
bed= int(bed_div.text.replace('bd','').replace(",",""))
for bath_div in li_class.find_all('div',{'data-testid':'property-baths'}):
bath =int(bath_div.text.replace('ba','').replace(",",""))
for area_div in li_class.find_all('div',{'data-testid':'property-floorSpace'}):
area=int(area_div.text.split('sqft')[0].replace(",",""))
for address_div in li_class.find_all('div',{'data-testid':'property-address'}):
address =address_div.text
try:
sql_insert = """
insert into gp17.house1(price,bed,bath,area,address)
values('{}','{}','{}','{}','{}')
""".format(price,bed,bath,area,address)
cur.execute(sql_insert)
conn.commit()
except:
conn.rollback()
except:
pass
###Output
_____no_output_____
###Markdown
Query the Table
###Code
df = pandas.read_sql_query('select * from gp17.house1 ', conn)
df[:10]
###Output
_____no_output_____
###Markdown
basic stat
###Code
df.describe()
###Output
_____no_output_____
###Markdown
price distribution
###Code
df['price'].hist()
###Output
_____no_output_____
###Markdown
bed vs bath
###Code
df.plot.scatter(x='bed',y='bath')
###Output
_____no_output_____
###Markdown
import libraries
###Code
import pandas
import configparser
import psycopg2
config = configparser.ConfigParser()
config.read('config.ini')
host = config['myaws']['host']
db = config['myaws']['db']
user = config['myaws']['user']
pwd = config['myaws']['pwd']
conn = psycopg2.connect(host = host,
user = user,
password = pwd,
dbname = db
)
cur = conn.cursor()
###Output
_____no_output_____
###Markdown
create the house table; make sure to change the schema name to your gp number
###Code
# replace the schema and table name to your schema and table name
table_sql = """
CREATE TABLE IF NOT EXISTS gp4.house
(
price integer,
bed integer,
bath integer,
area integer,
address VARCHAR(200),
PRIMARY KEY(address)
);
"""
###Output
_____no_output_____
###Markdown
use the cell below only if you want to delete the table
###Code
#conn.rollback()
#table_sql="drop table if exists demo.house"
cur.execute(table_sql)
conn.commit()
###Output
_____no_output_____
###Markdown
define the search region
###Code
url = 'https://www.trulia.com/VA/Virginia_Beach/23451/'
import urllib.request
response = urllib.request.urlopen(url)
html_data= response.read()
# print(html_data.decode('utf-8'))
from bs4 import BeautifulSoup
soup = BeautifulSoup(html_data,'html.parser')
# print (soup)
###Output
_____no_output_____
###Markdown
insert the records into database
###Code
for li_class in soup.find_all('li', class_ = 'Grid__CellBox-sc-144isrp-0 SearchResultsList__WideCell-b7y9ki-2 jiZmPM'):
try:
for price_div in li_class.find_all('div',{'data-testid':'property-price'}):
price =int(price_div.text.replace('$','').replace(",",""))
for bed_div in li_class.find_all('div', {'data-testid':'property-beds'}):
bed= int(bed_div.text.replace('bd','').replace(",",""))
for bath_div in li_class.find_all('div',{'data-testid':'property-baths'}):
bath =int(bath_div.text.replace('ba','').replace(",",""))
for area_div in li_class.find_all('div',{'data-testid':'property-floorSpace'}):
area=int(area_div.text.split('sqft')[0].replace(",",""))
for address_div in li_class.find_all('div',{'data-testid':'property-address'}):
address =address_div.text
try:
sql_insert = """
insert into gp4.house(price,bed,bath,area,address)
values('{}','{}','{}','{}','{}')
""".format(price,bed,bath,area,address)
cur.execute(sql_insert)
conn.commit()
except:
conn.rollback()
except:
pass
###Output
_____no_output_____
###Markdown
Query the Table
###Code
df = pandas.read_sql_query('select * from gp4.house ', conn)
df[:]
###Output
_____no_output_____
###Markdown
basic stat
###Code
df.describe()
###Output
_____no_output_____
###Markdown
price distribution
###Code
df['price'].hist()
###Output
_____no_output_____
###Markdown
bed vs bath
###Code
df.plot.scatter(x='bed',y='bath')
###Output
_____no_output_____
###Markdown
import libraries
###Code
import pandas
import configparser
import psycopg2
config = configparser.ConfigParser()
config.read('config.ini')
host = config['myaws']['host']
db = config['myaws']['db']
user = config['myaws']['user']
pwd = config['myaws']['pwd']
conn = psycopg2.connect(host = host,
user = user,
password = pwd,
dbname = db
)
cur = conn.cursor()
###Output
_____no_output_____
###Markdown
create the house table; make sure to change the schema name to your gp number
###Code
# replace the schema and table name to your schema and table name
table_sql = """
CREATE TABLE IF NOT EXISTS gp25.house
(
price integer,
bed integer,
bath integer,
area integer,
address VARCHAR(200),
PRIMARY KEY(address)
);
"""
###Output
_____no_output_____
###Markdown
use the cell below only if you want to delete the table
###Code
#conn.rollback()
#table_sql="drop table if exists demo.house"
cur.execute(table_sql)
conn.commit()
###Output
_____no_output_____
###Markdown
define the search region
###Code
url = 'https://www.trulia.com/VA/Leesburg/20176/'
import urllib.request
response = urllib.request.urlopen(url)
html_data= response.read()
# print(html_data.decode('utf-8'))
from bs4 import BeautifulSoup
soup = BeautifulSoup(html_data,'html.parser')
# print (soup)
###Output
_____no_output_____
###Markdown
insert the records into database
###Code
for li_class in soup.find_all('li', class_ = 'Grid__CellBox-sc-144isrp-0 SearchResultsList__WideCell-b7y9ki-2 jiZmPM'):
try:
for price_div in li_class.find_all('div',{'data-testid':'property-price'}):
price =int(price_div.text.replace('$','').replace(",",""))
for bed_div in li_class.find_all('div', {'data-testid':'property-beds'}):
bed= int(bed_div.text.replace('bd','').replace(",",""))
for bath_div in li_class.find_all('div',{'data-testid':'property-baths'}):
bath =int(bath_div.text.replace('ba','').replace(",",""))
for area_div in li_class.find_all('div',{'data-testid':'property-floorSpace'}):
area=int(area_div.text.split('sqft')[0].replace(",",""))
for address_div in li_class.find_all('div',{'data-testid':'property-address'}):
address =address_div.text
try:
sql_insert = """
insert into gp25.house(price,bed,bath,area,address)
values('{}','{}','{}','{}','{}')
""".format(price,bed,bath,area,address)
cur.execute(sql_insert)
conn.commit()
except:
conn.rollback()
except:
pass
###Output
_____no_output_____
###Markdown
Query the Table
###Code
df = pandas.read_sql_query('select * from gp25.house ', conn)
df[:]
###Output
_____no_output_____
###Markdown
basic stat
###Code
df.describe()
###Output
_____no_output_____
###Markdown
price distribution
###Code
df['price'].hist()
###Output
_____no_output_____
###Markdown
bed vs bath
###Code
df.plot.scatter(x='bed',y='bath')
###Output
_____no_output_____
###Markdown
import libraries
###Code
import pandas
import configparser
import psycopg2
config = configparser.ConfigParser()
config.read('config.ini')
host = config['myaws']['host']
db = config['myaws']['db']
user = config['myaws']['user']
pwd = config['myaws']['pwd']
conn = psycopg2.connect(host = host,
user = user,
password = pwd,
dbname = db
)
cur = conn.cursor()
###Output
_____no_output_____
###Markdown
create the house table; make sure to change the schema name to your gp number
###Code
# replace the schema and table name to your schema and table name
table_sql = """
CREATE TABLE IF NOT EXISTS gp15.house
(
price integer,
bed integer,
bath integer,
area integer,
address VARCHAR(200),
PRIMARY KEY(address)
);
"""
###Output
_____no_output_____
###Markdown
use the cell below only if you want to delete the table
###Code
#conn.rollback()
#table_sql="drop table if exists demo.house"
cur.execute(table_sql)
conn.commit()
###Output
_____no_output_____
###Markdown
define the search region
###Code
url = 'https://www.trulia.com/IL/Lake_Forest/60045/'
import urllib.request
response = urllib.request.urlopen(url)
html_data= response.read()
from bs4 import BeautifulSoup
soup = BeautifulSoup(html_data,'html.parser')
###Output
_____no_output_____
###Markdown
insert the records into database
###Code
for li_class in soup.find_all('li', class_ = 'Grid__CellBox-sc-144isrp-0 SearchResultsList__WideCell-b7y9ki-2 jiZmPM'):
try:
for price_div in li_class.find_all('div',{'data-testid':'property-price'}):
price =int(price_div.text.replace('$','').replace(",",""))
for bed_div in li_class.find_all('div', {'data-testid':'property-beds'}):
bed= int(bed_div.text.replace('bd','').replace(",",""))
for bath_div in li_class.find_all('div',{'data-testid':'property-baths'}):
bath =int(bath_div.text.replace('ba','').replace(",",""))
for area_div in li_class.find_all('div',{'data-testid':'property-floorSpace'}):
area=int(area_div.text.split('sqft')[0].replace(",",""))
for address_div in li_class.find_all('div',{'data-testid':'property-address'}):
address =address_div.text
try:
sql_insert = """
insert into gp15.house(price,bed,bath,area,address)
values('{}','{}','{}','{}','{}')
""".format(price,bed,bath,area,address)
cur.execute(sql_insert)
conn.commit()
except:
conn.rollback()
except:
pass
###Output
_____no_output_____
###Markdown
Query the Table
###Code
df = pandas.read_sql_query('select * from gp15.house ', conn)
df[:]
###Output
_____no_output_____
###Markdown
basic stat
###Code
df.describe()
###Output
_____no_output_____
###Markdown
price distribution
###Code
df['price'].hist()
###Output
_____no_output_____
###Markdown
bed vs bath
###Code
df.plot.scatter(x='bed',y='bath')
###Output
_____no_output_____
###Markdown
Extract Job Posts from Indeed Before extracting job posts from [Indeed](https://www.indeed.com/), make sure you have checked their [robots.txt](https://www.indeed.com/robots.txt) file. Create a table in the database
###Code
import pandas
import configparser
import psycopg2
###Output
/home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages/psycopg2/__init__.py:144: UserWarning: The psycopg2 wheel package will be renamed from release 2.8; in order to keep installing from binary please use "pip install psycopg2-binary" instead. For details see: <http://initd.org/psycopg/docs/install.html#binary-install-from-pypi>.
""")
###Markdown
Read the database connection info from the config.ini
###Code
config = configparser.ConfigParser()
config.read('config.ini')
host = config['myaws']['host']
db = config['myaws']['db']
user = config['myaws']['user']
pwd = config['myaws']['pwd']
###Output
_____no_output_____
###Markdown
Establish a connection to the database, and create a cursor.
###Code
conn = psycopg2.connect(host = host,
user = user,
password = pwd,
dbname = db
)
cur = conn.cursor()
###Output
_____no_output_____
###Markdown
Design the table in SQL
###Code
# replace the schema and table name to your schema and table name
table_sql = """
CREATE TABLE IF NOT EXISTS gp1.indeed
(
id SERIAL,
job_title VARCHAR(200),
job_company VARCHAR(200),
job_loc VARCHAR(200),
job_salary VARCHAR(200),
job_summary TEXT,
PRIMARY KEY(id)
);
"""
###Output
_____no_output_____
###Markdown
create the table
###Code
cur.execute(table_sql)
conn.commit()
###Output
_____no_output_____
###Markdown
Request HTML[urllib.request](https://docs.python.org/3/library/urllib.request.html) makes simple HTTP requests to visit a web page and get the content via the Python standard library. Here we define the URL to search job posts about Intelligence analyst.
###Code
url = 'https://www.indeed.com/jobs?q=intelligence+analyst&start=2'
import urllib.request
response = urllib.request.urlopen(url)
html_data= response.read()
#print(html_data.decode('utf-8'))
###Output
_____no_output_____
###Markdown
Parse HTMLWe can use the inspector tool in browsers to analyze webpages and use [beautifulsoup](https://www.crummy.com/software/BeautifulSoup/bs4/doc/) to extract webpage data. Pip install beautifulsoup4 if needed.
###Code
!pip install beautifulsoup4
from bs4 import BeautifulSoup
soup = BeautifulSoup(html_data,'html.parser')
#print (soup)
###Output
_____no_output_____
###Markdown
Use the tag.find_all('tag_name', tag_attr = 'possible_value') function to return a list of tags where the attribute equals the possible_value. Common attributes include: id, class_. Common functions include: tag.text (returns the visible part of the tag) and tag.get('attribute') (returns the value of the attribute of the tag). Since all the job posts are in the div tag with class = 'jobsearch-Sprep...', we need to find that div tag from the body tag.
###Code
for table_resultsBody in soup.find_all('table', id = 'resultsBody'):
pass
#print(table_resultsBody)
for table_pageContent in table_resultsBody.find_all('table', id = 'pageContent'):
pass
#print(table_pageContent)
for td_resultsCol in table_pageContent.find_all('td', id = 'resultsCol'):
pass
#print(td_resultsCol)
###Output
_____no_output_____
###Markdown
Save Data to DatabaseNow that we have found the div tag that contains the job posts, we need to identify the job title, company, ratings, reviews, salary, and summary. We can then save those records to our table in the database.
###Code
# identify the job title, company, ratings, reviews, salary, and summary
for div_row in td_resultsCol.find_all('div', class_='jobsearch-SerpJobCard unifiedRow row result'):
# find job title
job_title = None
job_company = None
job_rating = None
job_loc = None
job_salary = None
job_summary = None
for h2_title in div_row.find_all('h2', class_ = 'title'):
job_title = h2_title.a.text.strip().replace("'","_")
for div_dsc in div_row.find_all('div', class_ = 'sjcl'):
#find company name
for span_company in div_dsc.find_all('span', class_ = 'company'):
job_company = span_company.text.strip().replace("'","_")
# find location
for div_loc in div_dsc.find_all('div', class_ = 'location accessible-contrast-color-location'):
job_loc = div_loc.text.strip().replace("'","_")
# find salary
for div_salary in div_row.find_all('div',class_ ='salarySnippet'):
job_salary = div_salary.text.strip().replace("'","_")
#find summary
for div_summary in div_row.find_all('div', class_ = 'summary'):
job_summary = div_summary.text.strip().replace("'","_")
# insert into database
sql_insert = """
insert into gp1.indeed(job_title,job_company,job_loc,job_salary,job_summary)
values('{}','{}','{}','{}','{}')
""".format(job_title,job_company,job_loc,job_salary,job_summary)
cur.execute(sql_insert)
conn.commit()
###Output
_____no_output_____
###Markdown
View the Table
###Code
df = pandas.read_sql_query('select * from gp1.indeed',conn)
df[:]
###Output
_____no_output_____
###Markdown
Query the Table
###Code
df = pandas.read_sql_query('select count(*) as count,job_title from gp1.indeed group by job_title order by count desc ', conn)
df.plot.bar(x='job_title')
cur.close()
conn.close()
###Output
_____no_output_____
###Markdown
Import Libraries
###Code
import pandas
import configparser
import psycopg2
config = configparser.ConfigParser()
config.read('config.ini')
host = config['myaws']['host']
db = config['myaws']['db']
user = config['myaws']['user']
pwd = config['myaws']['pwd']
conn = psycopg2.connect(host = host,
user = user,
password = pwd,
dbname = db
)
cur = conn.cursor()
###Output
_____no_output_____
###Markdown
Create House Table
###Code
table_sql = """
CREATE TABLE IF NOT EXISTS gp8.house
(
price integer,
bed integer,
bath integer,
area integer,
address VARCHAR(200),
PRIMARY KEY(address)
);
"""
cur.execute(table_sql)
conn.commit()
###Output
_____no_output_____
###Markdown
Define Search Region
###Code
url = 'https://www.trulia.com/NJ/Hamilton/08690/'
import urllib.request
response = urllib.request.urlopen(url)
html_data= response.read()
#print(html_data.decode('utf-8'))
from bs4 import BeautifulSoup
soup = BeautifulSoup(html_data,'html.parser')
#print (soup)
###Output
_____no_output_____
###Markdown
Insert Records Into Database
###Code
for li_class in soup.find_all('li', class_ = 'Grid__CellBox-sc-144isrp-0 SearchResultsList__WideCell-b7y9ki-2 jiZmPM'):
try:
for price_div in li_class.find_all('div',{'data-testid':'property-price'}):
price =int(price_div.text.replace('$','').replace(",",""))
for bed_div in li_class.find_all('div', {'data-testid':'property-beds'}):
bed= int(bed_div.text.replace('bd','').replace(",",""))
for bath_div in li_class.find_all('div',{'data-testid':'property-baths'}):
bath =int(bath_div.text.replace('ba','').replace(",",""))
for area_div in li_class.find_all('div',{'data-testid':'property-floorSpace'}):
area=int(area_div.text.split('sqft')[0].replace(",",""))
for address_div in li_class.find_all('div',{'data-testid':'property-address'}):
address =address_div.text
try:
sql_insert = """
insert into gp8.house(price,bed,bath,area,address)
values('{}','{}','{}','{}','{}')
""".format(price,bed,bath,area,address)
cur.execute(sql_insert)
conn.commit()
except:
conn.rollback()
except:
pass
###Output
_____no_output_____
###Markdown
Query the Table
###Code
df = pandas.read_sql_query('select * from gp8.house ', conn)
df[:10]
###Output
_____no_output_____
###Markdown
Basic Stat
###Code
df.describe()
###Output
_____no_output_____
###Markdown
Price Distribution
###Code
df['price'].hist()
###Output
_____no_output_____
###Markdown
Bed vs Bath
###Code
df.plot.scatter(x='bed',y='bath')
###Output
_____no_output_____
###Markdown
Extract Job Posts from Indeed Before extracting job posts from [Indeed](https://www.indeed.com/), make sure you have checked their [robots.txt](https://www.indeed.com/robots.txt) file. Create a table in the database
###Code
import pandas
import configparser
import psycopg2
###Output
/home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages/psycopg2/__init__.py:144: UserWarning: The psycopg2 wheel package will be renamed from release 2.8; in order to keep installing from binary please use "pip install psycopg2-binary" instead. For details see: <http://initd.org/psycopg/docs/install.html#binary-install-from-pypi>.
""")
###Markdown
Read the database connection info from the config.ini
###Code
config = configparser.ConfigParser()
config.read('config.ini')
host = config['myaws']['host']
db = config['myaws']['db']
user = config['myaws']['user']
pwd = config['myaws']['pwd']
###Output
_____no_output_____
###Markdown
Establish a connection to the database, and create a cursor.
###Code
conn = psycopg2.connect(host = host,
user = user,
password = pwd,
dbname = db
)
cur = conn.cursor()
###Output
_____no_output_____
###Markdown
Design the table in SQL
###Code
# replace the schema and table name to your schema and table name
table_sql = """
CREATE TABLE IF NOT EXISTS gp21.indeed
(
id SERIAL,
job_title VARCHAR(200),
job_company VARCHAR(200),
job_loc VARCHAR(200),
job_salary VARCHAR(200),
job_summary TEXT,
PRIMARY KEY(id)
);
"""
###Output
_____no_output_____
###Markdown
create the table
###Code
cur.execute(table_sql)
conn.commit()
###Output
_____no_output_____
###Markdown
Request HTML[urllib.request](https://docs.python.org/3/library/urllib.request.html) makes simple HTTP requests to visit a web page and get the content via the Python standard library. Here we define the URL to search job posts about Intelligence analyst.
###Code
url = 'https://www.indeed.com/jobs?q=intelligence+analyst&start=2'
import urllib.request
response = urllib.request.urlopen(url)
html_data= response.read()
#print(html_data.decode('utf-8'))
###Output
_____no_output_____
###Markdown
Parse HTMLWe can use the inspector tool in browsers to analyze webpages and use [beautifulsoup](https://www.crummy.com/software/BeautifulSoup/bs4/doc/) to extract webpage data. Pip install beautifulsoup4 if needed.
###Code
!pip install beautifulsoup4
from bs4 import BeautifulSoup
soup = BeautifulSoup(html_data,'html.parser')
#print (soup)
###Output
_____no_output_____
###Markdown
Use the tag.find_all('tag_name', tag_attr = 'possible_value') function to return a list of tags where the attribute equals the possible_value. Common attributes include: id, class_. Common functions include: tag.text (returns the visible part of the tag) and tag.get('attribute') (returns the value of the attribute of the tag). Since all the job posts are in the div tag with class = 'jobsearch-Sprep...', we need to find that div tag from the body tag.
###Code
for table_resultsBody in soup.find_all('table', id = 'resultsBody'):
pass
#print(table_resultsBody)
for table_pageContent in table_resultsBody.find_all('table', id = 'pageContent'):
pass
#print(table_pageContent)
for td_resultsCol in table_pageContent.find_all('td', id = 'resultsCol'):
pass
#print(td_resultsCol)
###Output
_____no_output_____
###Markdown
Save Data to DatabaseNow that we have found the div tag that contains the job posts, we need to identify the job title, company, ratings, reviews, salary, and summary. We can then save those records to our table in the database.
###Code
# identify the job title, company, ratings, reviews, salary, and summary
for div_row in td_resultsCol.find_all('div', class_='jobsearch-SerpJobCard unifiedRow row result'):
# find job title
job_title = None
job_company = None
job_rating = None
job_loc = None
job_salary = None
job_summary = None
for h2_title in div_row.find_all('h2', class_ = 'title'):
job_title = h2_title.a.text.strip().replace("'","_")
for div_dsc in div_row.find_all('div', class_ = 'sjcl'):
#find company name
for span_company in div_dsc.find_all('span', class_ = 'company'):
job_company = span_company.text.strip().replace("'","_")
# find location
for div_loc in div_dsc.find_all('div', class_ = 'location accessible-contrast-color-location'):
job_loc = div_loc.text.strip().replace("'","_")
# find salary
for div_salary in div_row.find_all('div',class_ ='salarySnippet'):
job_salary = div_salary.text.strip().replace("'","_")
#find summary
for div_summary in div_row.find_all('div', class_ = 'summary'):
job_summary = div_summary.text.strip().replace("'","_")
# insert into database
sql_insert = """
insert into gp21.indeed(job_title,job_company,job_loc,job_salary,job_summary)
values('{}','{}','{}','{}','{}')
""".format(job_title,job_company,job_loc,job_salary,job_summary)
cur.execute(sql_insert)
conn.commit()
###Output
_____no_output_____
###Markdown
Query the Table
###Code
df = pandas.read_sql_query('select count(*) as count,job_title from gp21.indeed group by job_title order by count desc ', conn)
df.plot.bar(x='job_title')
cur.close()
conn.close()
###Output
_____no_output_____
###Markdown
import libraries
###Code
import pandas
import configparser
import psycopg2
config = configparser.ConfigParser()
config.read('config.ini')
host = config['myaws']['host']
db = config['myaws']['db']
user = config['myaws']['user']
pwd = config['myaws']['pwd']
conn = psycopg2.connect(host = host,
user = user,
password = pwd,
dbname = db
)
cur = conn.cursor()
###Output
_____no_output_____
###Markdown
create the house table
###Code
# replace the schema and table name to your schema and table name
table_sql = """
CREATE TABLE IF NOT EXISTS gp29.house
(
price integer,
bed integer,
bath integer,
area integer,
address VARCHAR(200),
PRIMARY KEY(address)
);
"""
#conn.rollback()
#table_sql="drop table if exists demo.house"
cur.execute(table_sql)
conn.commit()
###Output
_____no_output_____
###Markdown
define the search region
###Code
url = 'https://www.trulia.com/VA/McLean/22101/'
import urllib.request
response = urllib.request.urlopen(url)
html_data= response.read()
#print(html_data.decode('utf-8'))
from bs4 import BeautifulSoup
soup = BeautifulSoup(html_data,'html.parser')
#print (soup)
###Output
_____no_output_____
###Markdown
insert the records into the database
###Code
# loop over each search-result card and pull out the fields we want
for li_class in soup.find_all('li', class_ = 'Grid__CellBox-sc-144isrp-0 SearchResultsList__WideCell-b7y9ki-2 jiZmPM'):
    try:
        # price, e.g. '$1,200,000' -> 1200000
        for price_div in li_class.find_all('div',{'data-testid':'property-price'}):
            price = int(price_div.text.replace('$','').replace(",",""))
        # beds, e.g. '4bd' -> 4
        for bed_div in li_class.find_all('div', {'data-testid':'property-beds'}):
            bed = int(bed_div.text.replace('bd','').replace(",",""))
        # baths, e.g. '3ba' -> 3
        for bath_div in li_class.find_all('div',{'data-testid':'property-baths'}):
            bath = int(bath_div.text.replace('ba','').replace(",",""))
        # floor area, e.g. '2,500 sqft' -> 2500
        for area_div in li_class.find_all('div',{'data-testid':'property-floorSpace'}):
            area = int(area_div.text.split('sqft')[0].replace(",",""))
        # street address (also the primary key of the table)
        for address_div in li_class.find_all('div',{'data-testid':'property-address'}):
            address = address_div.text
        try:
            sql_insert = """
            insert into gp29.house(price,bed,bath,area,address)
            values('{}','{}','{}','{}','{}')
            """.format(price,bed,bath,area,address)
            cur.execute(sql_insert)
            conn.commit()
        except:
            conn.rollback() # e.g. a duplicate address violates the primary key
    except:
        pass # skip cards that are missing one of the fields
###Output
_____no_output_____
###Markdown
Query the Table
###Code
df = pandas.read_sql_query('select * from gp29.house ', conn)
df[:10]
###Output
_____no_output_____
###Markdown
basic stats
###Code
df.describe()
###Output
_____no_output_____
###Markdown
price distribution
###Code
df['price'].hist()
###Output
_____no_output_____
###Markdown
bed vs bath
###Code
df.plot.scatter(x='bed',y='bath')
###Output
_____no_output_____
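###Markdown
One more summary you might compute (an optional sketch layered on the dataframe above; `price_per_sqft` is a column name introduced here, not from the original notebook):
###Code
# derived column: asking price per square foot of floor area
df['price_per_sqft'] = df['price'] / df['area']
print(df['price_per_sqft'].describe())
###Output
_____no_output_____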
###Markdown
import libraries
###Code
import pandas
import configparser
import psycopg2
config = configparser.ConfigParser()
config.read('config.ini')
host = config['myaws']['host']
db = config['myaws']['db']
user = config['myaws']['user']
pwd = config['myaws']['pwd']
conn = psycopg2.connect(host = host,
user = user,
password = pwd,
dbname = db
)
cur = conn.cursor()
###Output
_____no_output_____
###Markdown
create the house table; make sure to change the schema name to your gp number
###Code
# replace the schema and table name to your schema and table name
table_sql = """
CREATE TABLE IF NOT EXISTS gp22.house
(
price integer,
bed integer,
bath integer,
area integer,
address VARCHAR(200),
PRIMARY KEY(address)
);
"""
###Output
_____no_output_____
###Markdown
use the cell below only if you want to delete the table
###Code
#conn.rollback()
#table_sql="drop table if exists demo.house"
cur.execute(table_sql)
conn.commit()
###Output
_____no_output_____
###Markdown
define the search region
###Code
url = 'https://www.trulia.com/VA/Ashburn/20148/'
import urllib.request
response = urllib.request.urlopen(url)
html_data= response.read()
# print(html_data.decode('utf-8'))
from bs4 import BeautifulSoup
soup = BeautifulSoup(html_data,'html.parser')
# print (soup)
###Output
_____no_output_____
###Markdown
insert the records into the database
###Code
for li_class in soup.find_all('li', class_ = 'Grid__CellBox-sc-144isrp-0 SearchResultsList__WideCell-b7y9ki-2 jiZmPM'):
try:
for price_div in li_class.find_all('div',{'data-testid':'property-price'}):
price =int(price_div.text.replace('$','').replace(",",""))
for bed_div in li_class.find_all('div', {'data-testid':'property-beds'}):
bed= int(bed_div.text.replace('bd','').replace(",",""))
for bath_div in li_class.find_all('div',{'data-testid':'property-baths'}):
bath =int(bath_div.text.replace('ba','').replace(",",""))
for area_div in li_class.find_all('div',{'data-testid':'property-floorSpace'}):
area=int(area_div.text.split('sqft')[0].replace(",",""))
for address_div in li_class.find_all('div',{'data-testid':'property-address'}):
address =address_div.text
try:
sql_insert = """
insert into gp22.house(price,bed,bath,area,address)
values('{}','{}','{}','{}','{}')
""".format(price,bed,bath,area,address)
cur.execute(sql_insert)
conn.commit()
except:
conn.rollback()
except:
pass
###Output
_____no_output_____
###Markdown
Query the Table
###Code
df = pandas.read_sql_query('select * from gp22.house ', conn)
df[:]
###Output
_____no_output_____
###Markdown
basic stats
###Code
df.describe()
###Output
_____no_output_____
###Markdown
price distribution
###Code
df['price'].hist()
###Output
_____no_output_____
###Markdown
bed vs bath
###Code
df.plot.scatter(x='bed',y='bath')
###Output
_____no_output_____
###Markdown
Extract Job Posts from Indeed Before extracting job posts from [Indeed](https://www.indeed.com/), make sure you have checked their [robots.txt](https://www.indeed.com/robots.txt) file. Create a table in the database
###Code
import pandas
import configparser
import psycopg2
###Output
/home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages/psycopg2/__init__.py:144: UserWarning: The psycopg2 wheel package will be renamed from release 2.8; in order to keep installing from binary please use "pip install psycopg2-binary" instead. For details see: <http://initd.org/psycopg/docs/install.html#binary-install-from-pypi>.
""")
###Markdown
Read the database connection info from the config.ini
###Code
config = configparser.ConfigParser()
config.read('config.ini')
host = config['myaws']['host']
db = config['myaws']['db']
user = config['myaws']['user']
pwd = config['myaws']['pwd']
###Output
_____no_output_____
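###Markdown
For reference, the cell above assumes a config.ini laid out like the commented sketch below; the section name and keys come straight from the code, while the values are placeholders you would replace with your own credentials.
###Code
# config.ini (placeholder values):
#
# [myaws]
# host = your-database-host.example.com
# db = your_database_name
# user = your_username
# pwd = your_password
###Output
_____no_output_____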
###Markdown
Establish a connection to the database, and create a cursor.
###Code
conn = psycopg2.connect(host = host,
user = user,
password = pwd,
dbname = db
)
cur = conn.cursor()
###Output
_____no_output_____
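###Markdown
The cursor and connection opened here get closed explicitly at the end of the notebook. An optional alternative sketch: psycopg2 connections and cursors can also be used as context managers, which handle commit/rollback and cursor cleanup automatically.
###Code
# `with conn:` commits on success and rolls back on an exception;
# `with conn.cursor() as c:` closes the cursor when the block ends.
# (The connection itself still needs conn.close() eventually.)
with conn:
    with conn.cursor() as c:
        c.execute('select 1')
        print(c.fetchone())
###Output
_____no_output_____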
###Markdown
Design the table in SQL
###Code
# replace the schema and table name to your schema and table name
table_sql = """
CREATE TABLE IF NOT EXISTS gp26.indeed
(
id SERIAL,
job_title VARCHAR(200),
job_company VARCHAR(200),
job_loc VARCHAR(200),
job_salary VARCHAR(200),
job_summary TEXT,
PRIMARY KEY(id)
);
"""
###Output
_____no_output_____
###Markdown
create the table
###Code
cur.execute(table_sql)
conn.commit()
###Output
_____no_output_____
###Markdown
Request HTML[urllib.request](https://docs.python.org/3/library/urllib.request.html) makes simple HTTP requests to visit a web page and get the content via the Python standard library. Here we define the URL to search job posts about Intelligence analyst.
###Code
url = 'https://www.indeed.com/jobs?q=intelligence+analyst&start=2'
import urllib.request
response = urllib.request.urlopen(url)
html_data= response.read()
#print(html_data.decode('utf-8'))
###Output
_____no_output_____
###Markdown
Parse HTMLWe can use the inspector tool in browsers to analyze webpages and use [beautifulsoup](https://www.crummy.com/software/BeautifulSoup/bs4/doc/) to extract webpage data. Pip install beautifulsoup4 first if needed.
###Code
!pip install beautifulsoup4
from bs4 import BeautifulSoup
soup = BeautifulSoup(html_data,'html.parser')
#print (soup)
###Output
_____no_output_____
###Markdown
Use the tag.find_all('tag_name', tag_attr = 'possible_value') function to return a list of tags where the attribute equals the possible_value. Common attributes include id and class_. Common functions include tag.text (returns the visible part of the tag) and tag.get('attribute') (returns the value of the attribute of the tag). Since all the job posts are in the div tag class = 'jobsearch-Sprep...', we need to find that div tag from the body tag.
###Code
for table_resultsBody in soup.find_all('table', id = 'resultsBody'):
pass
#print(table_resultsBody)
for table_pageContent in table_resultsBody.find_all('table', id = 'pageContent'):
pass
#print(table_pageContent)
for td_resultsCol in table_pageContent.find_all('td', id = 'resultsCol'):
pass
#print(td_resultsCol)
###Output
_____no_output_____
###Markdown
Save Data to DatabaseNow that we have found the div tag that contains the job posts, we need to identify the job title, company, ratings, reviews, salary, and summary, and save those records to our table in the database.
###Code
# identify the job title, company, ratings, reviews, salary, and summary
for div_row in td_resultsCol.find_all('div', class_='jobsearch-SerpJobCard unifiedRow row result'):
# find job title
job_title = None
job_company = None
job_rating = None
job_loc = None
job_salary = None
job_summary = None
for h2_title in div_row.find_all('h2', class_ = 'title'):
job_title = h2_title.a.text.strip().replace("'","_")
for div_dsc in div_row.find_all('div', class_ = 'sjcl'):
#find company name
for span_company in div_dsc.find_all('span', class_ = 'company'):
job_company = span_company.text.strip().replace("'","_")
# find location
for div_loc in div_dsc.find_all('div', class_ = 'location accessible-contrast-color-location'):
job_loc = div_loc.text.strip().replace("'","_")
# find salary
for div_salary in div_row.find_all('div',class_ ='salarySnippet'):
job_salary = div_salary.text.strip().replace("'","_")
#find summary
for div_summary in div_row.find_all('div', class_ = 'summary'):
job_summary = div_summary.text.strip().replace("'","_")
# insert into database
sql_insert = """
insert into gp26.indeed(job_title,job_company,job_loc,job_salary,job_summary)
values('{}','{}','{}','{}','{}')
""".format(job_title,job_company,job_loc,job_salary,job_summary)
cur.execute(sql_insert)
conn.commit()
###Output
_____no_output_____
###Markdown
View the Table
###Code
df = pandas.read_sql_query('select * from gp26.indeed', conn)
df[:]
cur.close()
conn.close()
###Output
_____no_output_____
###Markdown
import libraries
###Code
import pandas
import configparser
import psycopg2
config = configparser.ConfigParser()
config.read('config.ini')
host = config['myaws']['host']
db = config['myaws']['db']
user = config['myaws']['user']
pwd = config['myaws']['pwd']
conn = psycopg2.connect(host = host,
user = user,
password = pwd,
dbname = db
)
cur = conn.cursor()
###Output
_____no_output_____
###Markdown
create the house table; make sure to change the schema name to your gp number
###Code
# replace the schema and table name to your schema and table name
table_sql = """
CREATE TABLE IF NOT EXISTS gp17.house
(
price integer,
bed integer,
bath integer,
area integer,
address VARCHAR(200),
PRIMARY KEY(address)
);
"""
###Output
_____no_output_____
###Markdown
use the cell below only if you want to delete the table
###Code
#conn.rollback()
#table_sql="drop table if exists demo.house"
cur.execute(table_sql)
conn.commit()
###Output
_____no_output_____
###Markdown
define the search region
###Code
url = 'https://www.trulia.com/VA/Chesapeake/'
import urllib.request
response = urllib.request.urlopen(url)
html_data= response.read()
# print(html_data.decode('utf-8'))
from bs4 import BeautifulSoup
soup = BeautifulSoup(html_data,'html.parser')
#print (soup)
###Output
_____no_output_____
###Markdown
insert the records into the database
###Code
for li_class in soup.find_all('li', class_ = 'Grid__CellBox-sc-144isrp-0 SearchResultsList__WideCell-b7y9ki-2 jiZmPM'):
try:
for price_div in li_class.find_all('div',{'data-testid':'property-price'}):
price =int(price_div.text.replace('$','').replace(",",""))
for bed_div in li_class.find_all('div', {'data-testid':'property-beds'}):
bed= int(bed_div.text.replace('bd','').replace(",",""))
for bath_div in li_class.find_all('div',{'data-testid':'property-baths'}):
bath =int(bath_div.text.replace('ba','').replace(",",""))
for area_div in li_class.find_all('div',{'data-testid':'property-floorSpace'}):
area=int(area_div.text.split('sqft')[0].replace(",",""))
for address_div in li_class.find_all('div',{'data-testid':'property-address'}):
address =address_div.text
try:
sql_insert = """
insert into gp17.house(price,bed,bath,area,address)
values('{}','{}','{}','{}','{}')
""".format(price,bed,bath,area,address)
cur.execute(sql_insert)
conn.commit()
except:
conn.rollback()
except:
pass
###Output
_____no_output_____
###Markdown
Query the Table
###Code
df = pandas.read_sql_query('select * from gp17.house ', conn)
df[:]
###Output
_____no_output_____
###Markdown
basic stats
###Code
df.describe()
###Output
_____no_output_____
###Markdown
price distribution
###Code
df['price'].hist()
###Output
_____no_output_____
###Markdown
bed vs bath
###Code
df.plot.scatter(x='bed',y='bath')
###Output
_____no_output_____
###Markdown
Extract Job Posts from Indeed Before extracting job posts from [Indeed](https://www.indeed.com/), make sure you have checked their [robots.txt](https://www.indeed.com/robots.txt) file. Create a table in the database
###Code
import pandas
import configparser
import psycopg2
###Output
/home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages/psycopg2/__init__.py:144: UserWarning: The psycopg2 wheel package will be renamed from release 2.8; in order to keep installing from binary please use "pip install psycopg2-binary" instead. For details see: <http://initd.org/psycopg/docs/install.html#binary-install-from-pypi>.
""")
###Markdown
Read the database connection info from the config.ini
###Code
config = configparser.ConfigParser()
config.read('config.ini')
host = config['myaws']['host']
db = config['myaws']['db']
user = config['myaws']['user']
pwd = config['myaws']['pwd']
###Output
_____no_output_____
###Markdown
Establish a connection to the database, and create a cursor.
###Code
conn = psycopg2.connect(host = host,
user = user,
password = pwd,
dbname = db
)
cur = conn.cursor()
###Output
_____no_output_____
###Markdown
Design the table in SQL
###Code
# replace the schema and table name to your schema and table name
table_sql = """
CREATE TABLE IF NOT EXISTS gp27.indeed
(
id SERIAL,
job_title VARCHAR(200),
job_company VARCHAR(200),
job_loc VARCHAR(200),
job_salary VARCHAR(200),
job_summary TEXT,
PRIMARY KEY(id)
);
"""
###Output
_____no_output_____
###Markdown
create the table
###Code
cur.execute(table_sql)
conn.commit()
###Output
_____no_output_____
###Markdown
Request HTML[urllib.request](https://docs.python.org/3/library/urllib.request.html) makes simple HTTP requests to visit a web page and get the content via the Python standard library. Here we define the URL to search job posts about Intelligence analyst.
###Code
url = 'https://www.indeed.com/jobs?q=intelligence+analyst&start=2'
import urllib.request
response = urllib.request.urlopen(url)
html_data= response.read()
#print(html_data.decode('utf-8'))
###Output
_____no_output_____
###Markdown
Parse HTMLWe can use the inspector tool in browsers to analyze webpages and use [beautifulsoup](https://www.crummy.com/software/BeautifulSoup/bs4/doc/) to extract webpage data. Pip install beautifulsoup4 first if needed.
###Code
!pip install beautifulsoup4
from bs4 import BeautifulSoup
soup = BeautifulSoup(html_data,'html.parser')
#print (soup)
###Output
_____no_output_____
###Markdown
Use the tag.find_all('tag_name', tag_attr = 'possible_value') function to return a list of tags where the attribute equals the possible_value. Common attributes include id and class_. Common functions include tag.text (returns the visible part of the tag) and tag.get('attribute') (returns the value of the attribute of the tag). Since all the job posts are in the div tag class = 'jobsearch-Sprep...', we need to find that div tag from the body tag.
###Code
for table_resultsBody in soup.find_all('table', id = 'resultsBody'):
pass
#print(table_resultsBody)
for table_pageContent in table_resultsBody.find_all('table', id = 'pageContent'):
pass
#print(table_pageContent)
for td_resultsCol in table_pageContent.find_all('td', id = 'resultsCol'):
pass
#print(td_resultsCol)
###Output
_____no_output_____
###Markdown
Save Data to DatabaseNow that we have found the div tag that contains the job posts, we need to identify the job title, company, ratings, reviews, salary, and summary, and save those records to our table in the database.
###Code
# identify the job title, company, ratings, reviews, salary, and summary
for div_row in td_resultsCol.find_all('div', class_='jobsearch-SerpJobCard unifiedRow row result'):
# find job title
job_title = None
job_company = None
job_rating = None
job_loc = None
job_salary = None
job_summary = None
for h2_title in div_row.find_all('h2', class_ = 'title'):
job_title = h2_title.a.text.strip().replace("'","_")
for div_dsc in div_row.find_all('div', class_ = 'sjcl'):
#find company name
for span_company in div_dsc.find_all('span', class_ = 'company'):
job_company = span_company.text.strip().replace("'","_")
# find location
for div_loc in div_dsc.find_all('div', class_ = 'location accessible-contrast-color-location'):
job_loc = div_loc.text.strip().replace("'","_")
# find salary
for div_salary in div_row.find_all('div',class_ ='salarySnippet'):
job_salary = div_salary.text.strip().replace("'","_")
#find summary
for div_summary in div_row.find_all('div', class_ = 'summary'):
job_summary = div_summary.text.strip().replace("'","_")
# insert into database
sql_insert = """
insert into gp27.indeed(job_title,job_company,job_loc,job_salary,job_summary)
values('{}','{}','{}','{}','{}')
""".format(job_title,job_company,job_loc,job_salary,job_summary)
cur.execute(sql_insert)
conn.commit()
###Output
_____no_output_____
###Markdown
Query the Table
###Code
df = pandas.read_sql_query('select count(*) as count,job_title from gp27.indeed group by job_title order by count desc ', conn)
df.plot.bar(x='job_title')
cur.close()
conn.close()
###Output
_____no_output_____
###Markdown
import libraries
###Code
import pandas
import configparser
import psycopg2
config = configparser.ConfigParser()
config.read('config.ini')
host = config['myaws']['host']
db = config['myaws']['db']
user = config['myaws']['user']
pwd = config['myaws']['pwd']
conn = psycopg2.connect(host = host,
user = user,
password = pwd,
dbname = db
)
cur = conn.cursor()
###Output
_____no_output_____
###Markdown
create the house table; make sure to change the schema name to your gp number
###Code
# replace the schema and table name to your schema and table name
table_sql = """
CREATE TABLE IF NOT EXISTS gp26.house
(
price integer,
bed integer,
bath integer,
area integer,
address VARCHAR(200),
PRIMARY KEY(address)
);
"""
###Output
_____no_output_____
###Markdown
use the cell below only if you want to delete the table
###Code
#conn.rollback()
#table_sql="drop table if exists gp26.house"
cur.execute(table_sql)
conn.commit()
###Output
_____no_output_____
###Markdown
define the search region
###Code
url = 'https://www.trulia.com/VA/Winchester/22602/'
import urllib.request
response = urllib.request.urlopen(url)
html_data= response.read()
#print(html_data.decode('utf-8'))
from bs4 import BeautifulSoup
soup = BeautifulSoup(html_data,'html.parser')
#print (soup)
###Output
_____no_output_____
###Markdown
insert the records into the database
###Code
for li_class in soup.find_all('li', class_ = 'Grid__CellBox-sc-144isrp-0 SearchResultsList__WideCell-b7y9ki-2 jiZmPM'):
try:
for price_div in li_class.find_all('div',{'data-testid':'property-price'}):
price =int(price_div.text.replace('$','').replace(",",""))
for bed_div in li_class.find_all('div', {'data-testid':'property-beds'}):
bed= int(bed_div.text.replace('bd','').replace(",",""))
for bath_div in li_class.find_all('div',{'data-testid':'property-baths'}):
bath =int(bath_div.text.replace('ba','').replace(",",""))
for area_div in li_class.find_all('div',{'data-testid':'property-floorSpace'}):
area=int(area_div.text.split('sqft')[0].replace(",",""))
for address_div in li_class.find_all('div',{'data-testid':'property-address'}):
address =address_div.text
try:
sql_insert = """
insert into gp26.house(price,bed,bath,area,address)
values('{}','{}','{}','{}','{}')
""".format(price,bed,bath,area,address)
cur.execute(sql_insert)
conn.commit()
except:
conn.rollback()
except:
pass
###Output
_____no_output_____
###Markdown
Query the Table
###Code
df = pandas.read_sql_query('select * from gp26.house ', conn)
df[:]
###Output
_____no_output_____
###Markdown
basic stats
###Code
df.describe()
###Output
_____no_output_____
###Markdown
price distribution
###Code
df['price'].hist()
###Output
_____no_output_____
###Markdown
bed vs bath
###Code
df.plot.scatter(x='bed',y='bath')
###Output
_____no_output_____
###Markdown
import libraries
###Code
import pandas
import configparser
import psycopg2
config = configparser.ConfigParser()
config.read('config.ini')
host = config['myaws']['host']
db = config['myaws']['db']
user = config['myaws']['user']
pwd = config['myaws']['pwd']
conn = psycopg2.connect(host = host,
user = user,
password = pwd,
dbname = db
)
cur = conn.cursor()
###Output
_____no_output_____
###Markdown
create the house table; make sure to change the schema name to your gp number
###Code
# replace the schema and table name to your schema and table name
table_sql = """
CREATE TABLE IF NOT EXISTS gp16.house
(
price integer,
bed integer,
bath integer,
area integer,
address VARCHAR(200),
PRIMARY KEY(address)
);
"""
###Output
_____no_output_____
###Markdown
use the cell below only if you want to delete the table
###Code
#conn.rollback()
#table_sql="drop table if exists demo.house"
cur.execute(table_sql)
conn.commit()
###Output
_____no_output_____
###Markdown
define the search region
###Code
url = 'https://www.trulia.com/VA/Ruther_Glen/22546/'
import urllib.request
response = urllib.request.urlopen(url)
html_data= response.read()
#print(html_data.decode('utf-8'))
from bs4 import BeautifulSoup
soup = BeautifulSoup(html_data,'html.parser')
#print (soup)
###Output
_____no_output_____
###Markdown
insert the records into the database
###Code
for li_class in soup.find_all('li', class_ = 'Grid__CellBox-sc-144isrp-0 SearchResultsList__WideCell-b7y9ki-2 jiZmPM'):
try:
for price_div in li_class.find_all('div',{'data-testid':'property-price'}):
price =int(price_div.text.replace('$','').replace(",",""))
for bed_div in li_class.find_all('div', {'data-testid':'property-beds'}):
bed= int(bed_div.text.replace('bd','').replace(",",""))
for bath_div in li_class.find_all('div',{'data-testid':'property-baths'}):
bath =int(bath_div.text.replace('ba','').replace(",",""))
for area_div in li_class.find_all('div',{'data-testid':'property-floorSpace'}):
area=int(area_div.text.split('sqft')[0].replace(",",""))
for address_div in li_class.find_all('div',{'data-testid':'property-address'}):
address =address_div.text
try:
sql_insert = """
insert into gp16.house(price,bed,bath,area,address)
values('{}','{}','{}','{}','{}')
""".format(price,bed,bath,area,address)
cur.execute(sql_insert)
conn.commit()
except:
conn.rollback()
except:
pass
###Output
_____no_output_____
###Markdown
Query the Table
###Code
df = pandas.read_sql_query('select * from gp16.house ', conn)
df[:]
###Output
_____no_output_____
###Markdown
basic stats
###Code
df.describe()
###Output
_____no_output_____
###Markdown
price distribution
###Code
df['price'].hist()
###Output
_____no_output_____
###Markdown
bed vs bath
###Code
df.plot.scatter(x='bed',y='bath')
###Output
_____no_output_____
###Markdown
import libraries
###Code
import pandas
import configparser
import psycopg2
config = configparser.ConfigParser()
config.read('config.ini')
host = config['myaws']['host']
db = config['myaws']['db']
user = config['myaws']['user']
pwd = config['myaws']['pwd']
conn = psycopg2.connect(host = host,
user = user,
password = pwd,
dbname = db
)
cur = conn.cursor()
###Output
_____no_output_____
###Markdown
create the house table; make sure to change the schema name to your gp number
###Code
# replace the schema and table name to your schema and table name
table_sql = """
CREATE TABLE IF NOT EXISTS gp6.house
(
price integer,
bed integer,
bath integer,
area integer,
address VARCHAR(200),
PRIMARY KEY(address)
);
"""
###Output
_____no_output_____
###Markdown
use the cell below only if you want to delete the table
###Code
#conn.rollback()
#table_sql="drop table if exists demo.house"
cur.execute(table_sql)
conn.commit()
###Output
_____no_output_____
###Markdown
define the search region
###Code
url = 'https://www.trulia.com/VA/Harrisonburg/90027/'
import urllib.request
response = urllib.request.urlopen(url)
html_data= response.read()
# print(html_data.decode('utf-8'))
from bs4 import BeautifulSoup
soup = BeautifulSoup(html_data,'html.parser')
# print (soup)
###Output
_____no_output_____
###Markdown
insert the records into the database
###Code
for li_class in soup.find_all('li', class_ = 'Grid__CellBox-sc-144isrp-0 SearchResultsList__WideCell-b7y9ki-2 jiZmPM'):
try:
for price_div in li_class.find_all('div',{'data-testid':'property-price'}):
price =int(price_div.text.replace('$','').replace(",",""))
for bed_div in li_class.find_all('div', {'data-testid':'property-beds'}):
bed= int(bed_div.text.replace('bd','').replace(",",""))
for bath_div in li_class.find_all('div',{'data-testid':'property-baths'}):
bath =int(bath_div.text.replace('ba','').replace(",",""))
for area_div in li_class.find_all('div',{'data-testid':'property-floorSpace'}):
area=int(area_div.text.split('sqft')[0].replace(",",""))
for address_div in li_class.find_all('div',{'data-testid':'property-address'}):
address =address_div.text
try:
sql_insert = """
insert into gp6.house(price,bed,bath,area,address)
values('{}','{}','{}','{}','{}')
""".format(price,bed,bath,area,address)
cur.execute(sql_insert)
conn.commit()
except:
conn.rollback()
except:
pass
###Output
_____no_output_____
###Markdown
Query the Table
###Code
df = pandas.read_sql_query('select * from gp6.house ', conn)
df[:10]
###Output
_____no_output_____
###Markdown
basic stats
###Code
df.describe()
###Output
_____no_output_____
###Markdown
price distribution
###Code
df['price'].hist()
###Output
_____no_output_____
###Markdown
bed vs bath
###Code
df.plot.scatter(x='bed',y='bath')
###Output
_____no_output_____
###Markdown
Q3
###Code
import json  # parse the census API response (html_str) fetched in an earlier cell
import xlwt  # xlwt writes legacy .xls spreadsheets

book = xlwt.Workbook()
sheet = book.add_sheet('va_pop')
i = 0
if html_str:
    json_data = json.loads(html_str)
    for record in json_data:  # the first record is the header row, written to row 0
        total_pop, male_pop, count_name, state, count_num = record
        sheet.write(i, 0, total_pop)
        sheet.write(i, 1, male_pop)
        i += 1
book.save('census.xlsx')
###Output
_____no_output_____
###Markdown
Q4
###Code
# 4. Load the data from the Excel file into your notebook, and print the first ten rows.
import xlrd  # xlrd reads the workbook written above

book = xlrd.open_workbook('census.xlsx')
my_sheet = book.sheet_by_name('va_pop')
print(my_sheet.nrows)  # nrows gives the number of rows in the sheet
for i in range(1, 11):  # row 0 is the header row, so start from row 1
    row = my_sheet.row_values(i)
    total, male = row
    print(total, male)
###Output
33060 16125
104287 49946
15919 7788
12793 6642
31999 15346
15314 7424
226092 112644
74330 37572
4558 2465
76933 37888
###Markdown
Q5
###Code
# 5. Read your Excel file in python, add a new column, and calculate the male/total ratio for each county.
import xlrd
from xlutils.copy import copy  # copy() turns a read-only xlrd workbook into a writable xlwt one

read_book = xlrd.open_workbook('census.xlsx')
my_sheet = read_book.sheet_by_name('va_pop')  # read from read_book, not the earlier `book`
write_book = copy(read_book)
write_sheet = write_book.get_sheet(0)
num_rows = my_sheet.nrows
for i in range(num_rows):
    row = my_sheet.row_values(i)
    total, male = row
    if i == 0:
        write_sheet.write(i, 2, 'ratio')  # row 0 is the header row
    else:
        write_sheet.write(i, 2, int(male) / int(total))
write_book.save('censusq5.xlsx')
###Output
_____no_output_____
###Markdown
Q6
###Code
# 6. Load the data from the Excel file into your notebook and print the first ten rows with the new column.
book = xlrd.open_workbook('censusq5.xlsx')
sheet = book.sheet_by_name('va_pop')
print(sheet.nrows)  # use the sheet we just opened, not the earlier my_sheet
for i in range(1, 11):
    row = sheet.row_values(i)
    total, male, ratio = row
    print(total, male, ratio)
###Output
33060 16125 0.48774954627949185
104287 49946 0.4789283419793455
15919 7788 0.48922671022049125
12793 6642 0.5191901821308528
31999 15346 0.4795774867964624
15314 7424 0.48478516390231163
226092 112644 0.4982219627408312
74330 37572 0.5054755818646576
4558 2465 0.5408073716542343
76933 37888 0.4924804700193675
###Markdown
Simulating Language, Lab 6, Compositionality from iterated learning In this lab, we'll be building a replication of the simulation in [Kirby et al. (2015)](https://www.sciencedirect.com/science/article/pii/S0010027715000815?via%3Dihub) which looks at how compositional structure can evolve if language is both transmitted to new learners each generation *and* used for communication. This is a pretty close replication of the original paper, but with a notable simplification, namely that learners assume that they are learning a single language (even if that language might actually have been generated by multiple speakers who might each have been speaking a different language). This simplification doesn't seem to alter the results much and means we don't need a supercomputer to run the simulations, which is a bonus! Representing meanings, signals, and grammarsUnlike the language models we've been working with so far, in order to look at compositional structure we have to allow meanings and signals (words or sentences, depending on how you think of them - you might think of them as *forms* if you like a general, slightly ambiguous term) to consist of component parts: in a compositional language, the signal associated with a meaning depends in a predictable way on the components of that meaning, with each part of the signal conveying part of the meaning. In order to keep things manageable we're using a very simple meaning space: each meaning consists of two features, each of which can take two possible values, which means there are 4 possible meanings our language has to encode. If it helps, you can think of the first meaning feature as corresponding to shape, and the second to colour. Then `0` might be *square*, `1` might be *circle*, `2` could be *red*, and `3` could be *blue*. In this way `02` represents the meaning *red square*. In the same way, our signal space consists of just four possible sentences (two-letter strings made up of *a*s and *b*s, i.e. `aa`, `ab`, `ba`, `bb`). Again, you can imagine that `a` and `b` correspond to different words and each signal consists of a two-word sentence, or you can imagine that they are morphemes and each signal consists of a multi-morphemic word.
###Code
meanings = ['02', '03', '12', '13']
signals = ['aa', 'ab', 'ba', 'bb']
###Output
_____no_output_____
###Markdown
Now that we have a representation of meanings and signals, we can represent a language, which (like in Lab 5) is a list of pairings of meanings and their associated signals. We are using a slightly different representation this time: each language consists of exactly 4 entries - four meaning-signal pairings, one signal for each meaning. As in Lab 5, each item is a *pair*: the first item in the pair is the meaning, and the second is the signal. For example, here is a degenerate language, where every meaning is expressed using the same signal:```pythona_degenerate_language = [('02', 'aa'), ('03', 'aa'), ('12', 'aa'), ('13', 'aa')]```Notice that this is a bit different from how we were representing ambiguous signals in Lab 5, where meanings were represented as sets. And here is a compositional language, where there is a reliable correspondence between components of the meaning and components of the signal that expresses it:```pythona_compositional_language = [('02', 'aa'), ('03', 'ab'), ('12', 'ba'), ('13', 'bb')]``` Check that you understand how meanings, signals and languages are represented, and why `a_compositional_language` is compositional, then create another degenerate language and another compositional language. Now that we have defined what a language looks like, we can lay out the hypothesis space - the space of all possible languages - and the priors for those languages. Before we go any further, how many possible languages do you think there will be, given that we have only 4 meanings to express and only 4 possible signals to express them? The process of enumerating the possible languages and calculating their prior probability is actually slightly involved: the prior for each language depends on its coding length, so we have to write down a mini grammar for each language, calculate its coding length, and then work out the prior based on that. Rather than going through all this code here, we are simply going to provide you with lists of all the possible languages (`possible_languages`), and their (log) prior probabilities (`log_priors`), which we prepared in advance based on the method in the Kirby et al. (2015) paper: the nth item in the `log_priors` list is the prior for the nth language in `possible_languages`. Additionally we provide a list of *types* for each language (in the same order as the `possible_languages` list). Type `0` means *degenerate*, type `1` means *holistic*, type `2` is *other* (e.g. languages that are partially degenerate), and type `3` is compositional.
###Code
possible_languages = [[('02', 'aa'), ('03', 'aa'), ('12', 'aa'), ('13', 'aa')], [('02', 'aa'), ('03', 'aa'), ('12', 'aa'), ('13', 'ab')], [('02', 'aa'), ('03', 'aa'), ('12', 'aa'), ('13', 'ba')], [('02', 'aa'), ('03', 'aa'), ('12', 'aa'), ('13', 'bb')], [('02', 'aa'), ('03', 'aa'), ('12', 'ab'), ('13', 'aa')], [('02', 'aa'), ('03', 'aa'), ('12', 'ab'), ('13', 'ab')], [('02', 'aa'), ('03', 'aa'), ('12', 'ab'), ('13', 'ba')], [('02', 'aa'), ('03', 'aa'), ('12', 'ab'), ('13', 'bb')], [('02', 'aa'), ('03', 'aa'), ('12', 'ba'), ('13', 'aa')], [('02', 'aa'), ('03', 'aa'), ('12', 'ba'), ('13', 'ab')], [('02', 'aa'), ('03', 'aa'), ('12', 'ba'), ('13', 'ba')], [('02', 'aa'), ('03', 'aa'), ('12', 'ba'), ('13', 'bb')], [('02', 'aa'), ('03', 'aa'), ('12', 'bb'), ('13', 'aa')], [('02', 'aa'), ('03', 'aa'), ('12', 'bb'), ('13', 'ab')], [('02', 'aa'), ('03', 'aa'), ('12', 'bb'), ('13', 'ba')], [('02', 'aa'), ('03', 'aa'), ('12', 'bb'), ('13', 'bb')], [('02', 'aa'), ('03', 'ab'), ('12', 'aa'), ('13', 'aa')], [('02', 'aa'), ('03', 'ab'), ('12', 'aa'), ('13', 'ab')], [('02', 'aa'), ('03', 'ab'), ('12', 'aa'), ('13', 'ba')], [('02', 'aa'), ('03', 'ab'), ('12', 'aa'), ('13', 'bb')], [('02', 'aa'), ('03', 'ab'), ('12', 'ab'), ('13', 'aa')], [('02', 'aa'), ('03', 'ab'), ('12', 'ab'), ('13', 'ab')], [('02', 'aa'), ('03', 'ab'), ('12', 'ab'), ('13', 'ba')], [('02', 'aa'), ('03', 'ab'), ('12', 'ab'), ('13', 'bb')], [('02', 'aa'), ('03', 'ab'), ('12', 'ba'), ('13', 'aa')], [('02', 'aa'), ('03', 'ab'), ('12', 'ba'), ('13', 'ab')], [('02', 'aa'), ('03', 'ab'), ('12', 'ba'), ('13', 'ba')], [('02', 'aa'), ('03', 'ab'), ('12', 'ba'), ('13', 'bb')], [('02', 'aa'), ('03', 'ab'), ('12', 'bb'), ('13', 'aa')], [('02', 'aa'), ('03', 'ab'), ('12', 'bb'), ('13', 'ab')], [('02', 'aa'), ('03', 'ab'), ('12', 'bb'), ('13', 'ba')], [('02', 'aa'), ('03', 'ab'), ('12', 'bb'), ('13', 'bb')], [('02', 'aa'), ('03', 'ba'), ('12', 'aa'), ('13', 'aa')], [('02', 'aa'), ('03', 'ba'), ('12', 'aa'), ('13', 'ab')], [('02', 'aa'), ('03', 'ba'), ('12', 'aa'), ('13', 'ba')], [('02', 'aa'), ('03', 'ba'), ('12', 'aa'), ('13', 'bb')], [('02', 'aa'), ('03', 'ba'), ('12', 'ab'), ('13', 'aa')], [('02', 'aa'), ('03', 'ba'), ('12', 'ab'), ('13', 'ab')], [('02', 'aa'), ('03', 'ba'), ('12', 'ab'), ('13', 'ba')], [('02', 'aa'), ('03', 'ba'), ('12', 'ab'), ('13', 'bb')], [('02', 'aa'), ('03', 'ba'), ('12', 'ba'), ('13', 'aa')], [('02', 'aa'), ('03', 'ba'), ('12', 'ba'), ('13', 'ab')], [('02', 'aa'), ('03', 'ba'), ('12', 'ba'), ('13', 'ba')], [('02', 'aa'), ('03', 'ba'), ('12', 'ba'), ('13', 'bb')], [('02', 'aa'), ('03', 'ba'), ('12', 'bb'), ('13', 'aa')], [('02', 'aa'), ('03', 'ba'), ('12', 'bb'), ('13', 'ab')], [('02', 'aa'), ('03', 'ba'), ('12', 'bb'), ('13', 'ba')], [('02', 'aa'), ('03', 'ba'), ('12', 'bb'), ('13', 'bb')], [('02', 'aa'), ('03', 'bb'), ('12', 'aa'), ('13', 'aa')], [('02', 'aa'), ('03', 'bb'), ('12', 'aa'), ('13', 'ab')], [('02', 'aa'), ('03', 'bb'), ('12', 'aa'), ('13', 'ba')], [('02', 'aa'), ('03', 'bb'), ('12', 'aa'), ('13', 'bb')], [('02', 'aa'), ('03', 'bb'), ('12', 'ab'), ('13', 'aa')], [('02', 'aa'), ('03', 'bb'), ('12', 'ab'), ('13', 'ab')], [('02', 'aa'), ('03', 'bb'), ('12', 'ab'), ('13', 'ba')], [('02', 'aa'), ('03', 'bb'), ('12', 'ab'), ('13', 'bb')], [('02', 'aa'), ('03', 'bb'), ('12', 'ba'), ('13', 'aa')], [('02', 'aa'), ('03', 'bb'), ('12', 'ba'), ('13', 'ab')], [('02', 'aa'), ('03', 'bb'), ('12', 'ba'), ('13', 'ba')], [('02', 'aa'), ('03', 'bb'), ('12', 'ba'), ('13', 'bb')], [('02', 'aa'), ('03', 'bb'), ('12', 'bb'), ('13', 
'aa')], [('02', 'aa'), ('03', 'bb'), ('12', 'bb'), ('13', 'ab')], [('02', 'aa'), ('03', 'bb'), ('12', 'bb'), ('13', 'ba')], [('02', 'aa'), ('03', 'bb'), ('12', 'bb'), ('13', 'bb')], [('02', 'ab'), ('03', 'aa'), ('12', 'aa'), ('13', 'aa')], [('02', 'ab'), ('03', 'aa'), ('12', 'aa'), ('13', 'ab')], [('02', 'ab'), ('03', 'aa'), ('12', 'aa'), ('13', 'ba')], [('02', 'ab'), ('03', 'aa'), ('12', 'aa'), ('13', 'bb')], [('02', 'ab'), ('03', 'aa'), ('12', 'ab'), ('13', 'aa')], [('02', 'ab'), ('03', 'aa'), ('12', 'ab'), ('13', 'ab')], [('02', 'ab'), ('03', 'aa'), ('12', 'ab'), ('13', 'ba')], [('02', 'ab'), ('03', 'aa'), ('12', 'ab'), ('13', 'bb')], [('02', 'ab'), ('03', 'aa'), ('12', 'ba'), ('13', 'aa')], [('02', 'ab'), ('03', 'aa'), ('12', 'ba'), ('13', 'ab')], [('02', 'ab'), ('03', 'aa'), ('12', 'ba'), ('13', 'ba')], [('02', 'ab'), ('03', 'aa'), ('12', 'ba'), ('13', 'bb')], [('02', 'ab'), ('03', 'aa'), ('12', 'bb'), ('13', 'aa')], [('02', 'ab'), ('03', 'aa'), ('12', 'bb'), ('13', 'ab')], [('02', 'ab'), ('03', 'aa'), ('12', 'bb'), ('13', 'ba')], [('02', 'ab'), ('03', 'aa'), ('12', 'bb'), ('13', 'bb')], [('02', 'ab'), ('03', 'ab'), ('12', 'aa'), ('13', 'aa')], [('02', 'ab'), ('03', 'ab'), ('12', 'aa'), ('13', 'ab')], [('02', 'ab'), ('03', 'ab'), ('12', 'aa'), ('13', 'ba')], [('02', 'ab'), ('03', 'ab'), ('12', 'aa'), ('13', 'bb')], [('02', 'ab'), ('03', 'ab'), ('12', 'ab'), ('13', 'aa')], [('02', 'ab'), ('03', 'ab'), ('12', 'ab'), ('13', 'ab')], [('02', 'ab'), ('03', 'ab'), ('12', 'ab'), ('13', 'ba')], [('02', 'ab'), ('03', 'ab'), ('12', 'ab'), ('13', 'bb')], [('02', 'ab'), ('03', 'ab'), ('12', 'ba'), ('13', 'aa')], [('02', 'ab'), ('03', 'ab'), ('12', 'ba'), ('13', 'ab')], [('02', 'ab'), ('03', 'ab'), ('12', 'ba'), ('13', 'ba')], [('02', 'ab'), ('03', 'ab'), ('12', 'ba'), ('13', 'bb')], [('02', 'ab'), ('03', 'ab'), ('12', 'bb'), ('13', 'aa')], [('02', 'ab'), ('03', 'ab'), ('12', 'bb'), ('13', 'ab')], [('02', 'ab'), ('03', 'ab'), ('12', 'bb'), ('13', 'ba')], [('02', 'ab'), ('03', 'ab'), ('12', 'bb'), ('13', 'bb')], [('02', 'ab'), ('03', 'ba'), ('12', 'aa'), ('13', 'aa')], [('02', 'ab'), ('03', 'ba'), ('12', 'aa'), ('13', 'ab')], [('02', 'ab'), ('03', 'ba'), ('12', 'aa'), ('13', 'ba')], [('02', 'ab'), ('03', 'ba'), ('12', 'aa'), ('13', 'bb')], [('02', 'ab'), ('03', 'ba'), ('12', 'ab'), ('13', 'aa')], [('02', 'ab'), ('03', 'ba'), ('12', 'ab'), ('13', 'ab')], [('02', 'ab'), ('03', 'ba'), ('12', 'ab'), ('13', 'ba')], [('02', 'ab'), ('03', 'ba'), ('12', 'ab'), ('13', 'bb')], [('02', 'ab'), ('03', 'ba'), ('12', 'ba'), ('13', 'aa')], [('02', 'ab'), ('03', 'ba'), ('12', 'ba'), ('13', 'ab')], [('02', 'ab'), ('03', 'ba'), ('12', 'ba'), ('13', 'ba')], [('02', 'ab'), ('03', 'ba'), ('12', 'ba'), ('13', 'bb')], [('02', 'ab'), ('03', 'ba'), ('12', 'bb'), ('13', 'aa')], [('02', 'ab'), ('03', 'ba'), ('12', 'bb'), ('13', 'ab')], [('02', 'ab'), ('03', 'ba'), ('12', 'bb'), ('13', 'ba')], [('02', 'ab'), ('03', 'ba'), ('12', 'bb'), ('13', 'bb')], [('02', 'ab'), ('03', 'bb'), ('12', 'aa'), ('13', 'aa')], [('02', 'ab'), ('03', 'bb'), ('12', 'aa'), ('13', 'ab')], [('02', 'ab'), ('03', 'bb'), ('12', 'aa'), ('13', 'ba')], [('02', 'ab'), ('03', 'bb'), ('12', 'aa'), ('13', 'bb')], [('02', 'ab'), ('03', 'bb'), ('12', 'ab'), ('13', 'aa')], [('02', 'ab'), ('03', 'bb'), ('12', 'ab'), ('13', 'ab')], [('02', 'ab'), ('03', 'bb'), ('12', 'ab'), ('13', 'ba')], [('02', 'ab'), ('03', 'bb'), ('12', 'ab'), ('13', 'bb')], [('02', 'ab'), ('03', 'bb'), ('12', 'ba'), ('13', 'aa')], [('02', 'ab'), ('03', 'bb'), ('12', 'ba'), ('13', 'ab')], [('02', 
'ab'), ('03', 'bb'), ('12', 'ba'), ('13', 'ba')], [('02', 'ab'), ('03', 'bb'), ('12', 'ba'), ('13', 'bb')], [('02', 'ab'), ('03', 'bb'), ('12', 'bb'), ('13', 'aa')], [('02', 'ab'), ('03', 'bb'), ('12', 'bb'), ('13', 'ab')], [('02', 'ab'), ('03', 'bb'), ('12', 'bb'), ('13', 'ba')], [('02', 'ab'), ('03', 'bb'), ('12', 'bb'), ('13', 'bb')], [('02', 'ba'), ('03', 'aa'), ('12', 'aa'), ('13', 'aa')], [('02', 'ba'), ('03', 'aa'), ('12', 'aa'), ('13', 'ab')], [('02', 'ba'), ('03', 'aa'), ('12', 'aa'), ('13', 'ba')], [('02', 'ba'), ('03', 'aa'), ('12', 'aa'), ('13', 'bb')], [('02', 'ba'), ('03', 'aa'), ('12', 'ab'), ('13', 'aa')], [('02', 'ba'), ('03', 'aa'), ('12', 'ab'), ('13', 'ab')], [('02', 'ba'), ('03', 'aa'), ('12', 'ab'), ('13', 'ba')], [('02', 'ba'), ('03', 'aa'), ('12', 'ab'), ('13', 'bb')], [('02', 'ba'), ('03', 'aa'), ('12', 'ba'), ('13', 'aa')], [('02', 'ba'), ('03', 'aa'), ('12', 'ba'), ('13', 'ab')], [('02', 'ba'), ('03', 'aa'), ('12', 'ba'), ('13', 'ba')], [('02', 'ba'), ('03', 'aa'), ('12', 'ba'), ('13', 'bb')], [('02', 'ba'), ('03', 'aa'), ('12', 'bb'), ('13', 'aa')], [('02', 'ba'), ('03', 'aa'), ('12', 'bb'), ('13', 'ab')], [('02', 'ba'), ('03', 'aa'), ('12', 'bb'), ('13', 'ba')], [('02', 'ba'), ('03', 'aa'), ('12', 'bb'), ('13', 'bb')], [('02', 'ba'), ('03', 'ab'), ('12', 'aa'), ('13', 'aa')], [('02', 'ba'), ('03', 'ab'), ('12', 'aa'), ('13', 'ab')], [('02', 'ba'), ('03', 'ab'), ('12', 'aa'), ('13', 'ba')], [('02', 'ba'), ('03', 'ab'), ('12', 'aa'), ('13', 'bb')], [('02', 'ba'), ('03', 'ab'), ('12', 'ab'), ('13', 'aa')], [('02', 'ba'), ('03', 'ab'), ('12', 'ab'), ('13', 'ab')], [('02', 'ba'), ('03', 'ab'), ('12', 'ab'), ('13', 'ba')], [('02', 'ba'), ('03', 'ab'), ('12', 'ab'), ('13', 'bb')], [('02', 'ba'), ('03', 'ab'), ('12', 'ba'), ('13', 'aa')], [('02', 'ba'), ('03', 'ab'), ('12', 'ba'), ('13', 'ab')], [('02', 'ba'), ('03', 'ab'), ('12', 'ba'), ('13', 'ba')], [('02', 'ba'), ('03', 'ab'), ('12', 'ba'), ('13', 'bb')], [('02', 'ba'), ('03', 'ab'), ('12', 'bb'), ('13', 'aa')], [('02', 'ba'), ('03', 'ab'), ('12', 'bb'), ('13', 'ab')], [('02', 'ba'), ('03', 'ab'), ('12', 'bb'), ('13', 'ba')], [('02', 'ba'), ('03', 'ab'), ('12', 'bb'), ('13', 'bb')], [('02', 'ba'), ('03', 'ba'), ('12', 'aa'), ('13', 'aa')], [('02', 'ba'), ('03', 'ba'), ('12', 'aa'), ('13', 'ab')], [('02', 'ba'), ('03', 'ba'), ('12', 'aa'), ('13', 'ba')], [('02', 'ba'), ('03', 'ba'), ('12', 'aa'), ('13', 'bb')], [('02', 'ba'), ('03', 'ba'), ('12', 'ab'), ('13', 'aa')], [('02', 'ba'), ('03', 'ba'), ('12', 'ab'), ('13', 'ab')], [('02', 'ba'), ('03', 'ba'), ('12', 'ab'), ('13', 'ba')], [('02', 'ba'), ('03', 'ba'), ('12', 'ab'), ('13', 'bb')], [('02', 'ba'), ('03', 'ba'), ('12', 'ba'), ('13', 'aa')], [('02', 'ba'), ('03', 'ba'), ('12', 'ba'), ('13', 'ab')], [('02', 'ba'), ('03', 'ba'), ('12', 'ba'), ('13', 'ba')], [('02', 'ba'), ('03', 'ba'), ('12', 'ba'), ('13', 'bb')], [('02', 'ba'), ('03', 'ba'), ('12', 'bb'), ('13', 'aa')], [('02', 'ba'), ('03', 'ba'), ('12', 'bb'), ('13', 'ab')], [('02', 'ba'), ('03', 'ba'), ('12', 'bb'), ('13', 'ba')], [('02', 'ba'), ('03', 'ba'), ('12', 'bb'), ('13', 'bb')], [('02', 'ba'), ('03', 'bb'), ('12', 'aa'), ('13', 'aa')], [('02', 'ba'), ('03', 'bb'), ('12', 'aa'), ('13', 'ab')], [('02', 'ba'), ('03', 'bb'), ('12', 'aa'), ('13', 'ba')], [('02', 'ba'), ('03', 'bb'), ('12', 'aa'), ('13', 'bb')], [('02', 'ba'), ('03', 'bb'), ('12', 'ab'), ('13', 'aa')], [('02', 'ba'), ('03', 'bb'), ('12', 'ab'), ('13', 'ab')], [('02', 'ba'), ('03', 'bb'), ('12', 'ab'), ('13', 'ba')], [('02', 'ba'), ('03', 
'bb'), ('12', 'ab'), ('13', 'bb')], [('02', 'ba'), ('03', 'bb'), ('12', 'ba'), ('13', 'aa')], [('02', 'ba'), ('03', 'bb'), ('12', 'ba'), ('13', 'ab')], [('02', 'ba'), ('03', 'bb'), ('12', 'ba'), ('13', 'ba')], [('02', 'ba'), ('03', 'bb'), ('12', 'ba'), ('13', 'bb')], [('02', 'ba'), ('03', 'bb'), ('12', 'bb'), ('13', 'aa')], [('02', 'ba'), ('03', 'bb'), ('12', 'bb'), ('13', 'ab')], [('02', 'ba'), ('03', 'bb'), ('12', 'bb'), ('13', 'ba')], [('02', 'ba'), ('03', 'bb'), ('12', 'bb'), ('13', 'bb')], [('02', 'bb'), ('03', 'aa'), ('12', 'aa'), ('13', 'aa')], [('02', 'bb'), ('03', 'aa'), ('12', 'aa'), ('13', 'ab')], [('02', 'bb'), ('03', 'aa'), ('12', 'aa'), ('13', 'ba')], [('02', 'bb'), ('03', 'aa'), ('12', 'aa'), ('13', 'bb')], [('02', 'bb'), ('03', 'aa'), ('12', 'ab'), ('13', 'aa')], [('02', 'bb'), ('03', 'aa'), ('12', 'ab'), ('13', 'ab')], [('02', 'bb'), ('03', 'aa'), ('12', 'ab'), ('13', 'ba')], [('02', 'bb'), ('03', 'aa'), ('12', 'ab'), ('13', 'bb')], [('02', 'bb'), ('03', 'aa'), ('12', 'ba'), ('13', 'aa')], [('02', 'bb'), ('03', 'aa'), ('12', 'ba'), ('13', 'ab')], [('02', 'bb'), ('03', 'aa'), ('12', 'ba'), ('13', 'ba')], [('02', 'bb'), ('03', 'aa'), ('12', 'ba'), ('13', 'bb')], [('02', 'bb'), ('03', 'aa'), ('12', 'bb'), ('13', 'aa')], [('02', 'bb'), ('03', 'aa'), ('12', 'bb'), ('13', 'ab')], [('02', 'bb'), ('03', 'aa'), ('12', 'bb'), ('13', 'ba')], [('02', 'bb'), ('03', 'aa'), ('12', 'bb'), ('13', 'bb')], [('02', 'bb'), ('03', 'ab'), ('12', 'aa'), ('13', 'aa')], [('02', 'bb'), ('03', 'ab'), ('12', 'aa'), ('13', 'ab')], [('02', 'bb'), ('03', 'ab'), ('12', 'aa'), ('13', 'ba')], [('02', 'bb'), ('03', 'ab'), ('12', 'aa'), ('13', 'bb')], [('02', 'bb'), ('03', 'ab'), ('12', 'ab'), ('13', 'aa')], [('02', 'bb'), ('03', 'ab'), ('12', 'ab'), ('13', 'ab')], [('02', 'bb'), ('03', 'ab'), ('12', 'ab'), ('13', 'ba')], [('02', 'bb'), ('03', 'ab'), ('12', 'ab'), ('13', 'bb')], [('02', 'bb'), ('03', 'ab'), ('12', 'ba'), ('13', 'aa')], [('02', 'bb'), ('03', 'ab'), ('12', 'ba'), ('13', 'ab')], [('02', 'bb'), ('03', 'ab'), ('12', 'ba'), ('13', 'ba')], [('02', 'bb'), ('03', 'ab'), ('12', 'ba'), ('13', 'bb')], [('02', 'bb'), ('03', 'ab'), ('12', 'bb'), ('13', 'aa')], [('02', 'bb'), ('03', 'ab'), ('12', 'bb'), ('13', 'ab')], [('02', 'bb'), ('03', 'ab'), ('12', 'bb'), ('13', 'ba')], [('02', 'bb'), ('03', 'ab'), ('12', 'bb'), ('13', 'bb')], [('02', 'bb'), ('03', 'ba'), ('12', 'aa'), ('13', 'aa')], [('02', 'bb'), ('03', 'ba'), ('12', 'aa'), ('13', 'ab')], [('02', 'bb'), ('03', 'ba'), ('12', 'aa'), ('13', 'ba')], [('02', 'bb'), ('03', 'ba'), ('12', 'aa'), ('13', 'bb')], [('02', 'bb'), ('03', 'ba'), ('12', 'ab'), ('13', 'aa')], [('02', 'bb'), ('03', 'ba'), ('12', 'ab'), ('13', 'ab')], [('02', 'bb'), ('03', 'ba'), ('12', 'ab'), ('13', 'ba')], [('02', 'bb'), ('03', 'ba'), ('12', 'ab'), ('13', 'bb')], [('02', 'bb'), ('03', 'ba'), ('12', 'ba'), ('13', 'aa')], [('02', 'bb'), ('03', 'ba'), ('12', 'ba'), ('13', 'ab')], [('02', 'bb'), ('03', 'ba'), ('12', 'ba'), ('13', 'ba')], [('02', 'bb'), ('03', 'ba'), ('12', 'ba'), ('13', 'bb')], [('02', 'bb'), ('03', 'ba'), ('12', 'bb'), ('13', 'aa')], [('02', 'bb'), ('03', 'ba'), ('12', 'bb'), ('13', 'ab')], [('02', 'bb'), ('03', 'ba'), ('12', 'bb'), ('13', 'ba')], [('02', 'bb'), ('03', 'ba'), ('12', 'bb'), ('13', 'bb')], [('02', 'bb'), ('03', 'bb'), ('12', 'aa'), ('13', 'aa')], [('02', 'bb'), ('03', 'bb'), ('12', 'aa'), ('13', 'ab')], [('02', 'bb'), ('03', 'bb'), ('12', 'aa'), ('13', 'ba')], [('02', 'bb'), ('03', 'bb'), ('12', 'aa'), ('13', 'bb')], [('02', 'bb'), ('03', 'bb'), ('12', 
'ab'), ('13', 'aa')], [('02', 'bb'), ('03', 'bb'), ('12', 'ab'), ('13', 'ab')], [('02', 'bb'), ('03', 'bb'), ('12', 'ab'), ('13', 'ba')], [('02', 'bb'), ('03', 'bb'), ('12', 'ab'), ('13', 'bb')], [('02', 'bb'), ('03', 'bb'), ('12', 'ba'), ('13', 'aa')], [('02', 'bb'), ('03', 'bb'), ('12', 'ba'), ('13', 'ab')], [('02', 'bb'), ('03', 'bb'), ('12', 'ba'), ('13', 'ba')], [('02', 'bb'), ('03', 'bb'), ('12', 'ba'), ('13', 'bb')], [('02', 'bb'), ('03', 'bb'), ('12', 'bb'), ('13', 'aa')], [('02', 'bb'), ('03', 'bb'), ('12', 'bb'), ('13', 'ab')], [('02', 'bb'), ('03', 'bb'), ('12', 'bb'), ('13', 'ba')], [('02', 'bb'), ('03', 'bb'), ('12', 'bb'), ('13', 'bb')]]
log_priors = [-0.9178860550328204, -10.749415928290118, -10.749415928290118, -11.272664072079987, -10.749415928290118, -10.749415928290118, -16.95425710594061, -17.294055179550075, -10.749415928290118, -16.95425710594061, -10.749415928290118, -17.294055179550075, -11.272664072079987, -17.294055179550075, -17.294055179550075, -11.272664072079987, -10.749415928290118, -10.749415928290118, -16.95425710594061, -17.294055179550075, -10.749415928290118, -10.749415928290118, -16.95425710594061, -17.294055179550075, -16.95425710594061, -16.95425710594061, -16.95425710594061, -12.460704095246543, -17.294055179550075, -17.294055179550075, -20.83821243446749, -17.294055179550075, -10.749415928290118, -16.95425710594061, -10.749415928290118, -17.294055179550075, -16.95425710594061, -16.95425710594061, -16.95425710594061, -12.460704095246543, -10.749415928290118, -16.95425710594061, -10.749415928290118, -17.294055179550075, -17.294055179550075, -20.83821243446749, -17.294055179550075, -17.294055179550075, -11.272664072079987, -17.294055179550075, -17.294055179550075, -11.272664072079987, -17.294055179550075, -17.294055179550075, -20.83821243446749, -17.294055179550075, -17.294055179550075, -20.83821243446749, -17.294055179550075, -17.294055179550075, -11.272664072079987, -17.294055179550075, -17.294055179550075, -11.272664072079987, -10.749415928290118, -10.749415928290118, -16.95425710594061, -17.294055179550075, -10.749415928290118, -10.749415928290118, -16.95425710594061, -17.294055179550075, -16.95425710594061, -16.95425710594061, -16.95425710594061, -20.83821243446749, -17.294055179550075, -17.294055179550075, -12.460704095246543, -17.294055179550075, -10.749415928290118, -10.749415928290118, -16.95425710594061, -17.294055179550075, -10.749415928290118, -2.304180416152711, -11.272664072079987, -10.749415928290118, -16.95425710594061, -11.272664072079987, -11.272664072079987, -16.95425710594061, -17.294055179550075, -10.749415928290118, -16.95425710594061, -10.749415928290118, -16.95425710594061, -16.95425710594061, -16.95425710594061, -20.83821243446749, -16.95425710594061, -11.272664072079987, -11.272664072079987, -16.95425710594061, -16.95425710594061, -11.272664072079987, -11.272664072079987, -16.95425710594061, -20.83821243446749, -16.95425710594061, -16.95425710594061, -16.95425710594061, -17.294055179550075, -17.294055179550075, -12.460704095246543, -17.294055179550075, -17.294055179550075, -10.749415928290118, -16.95425710594061, -10.749415928290118, -20.83821243446749, -16.95425710594061, -16.95425710594061, -16.95425710594061, -17.294055179550075, -10.749415928290118, -16.95425710594061, -10.749415928290118, -10.749415928290118, -16.95425710594061, -10.749415928290118, -17.294055179550075, -16.95425710594061, -16.95425710594061, -16.95425710594061, -20.83821243446749, -10.749415928290118, -16.95425710594061, -10.749415928290118, -17.294055179550075, -17.294055179550075, -12.460704095246543, -17.294055179550075, -17.294055179550075, -16.95425710594061, -16.95425710594061, -16.95425710594061, -20.83821243446749, -16.95425710594061, -11.272664072079987, -11.272664072079987, -16.95425710594061, -16.95425710594061, -11.272664072079987, -11.272664072079987, -16.95425710594061, -20.83821243446749, -16.95425710594061, -16.95425710594061, -16.95425710594061, -10.749415928290118, -16.95425710594061, -10.749415928290118, -17.294055179550075, -16.95425710594061, -11.272664072079987, -11.272664072079987, -16.95425710594061, -10.749415928290118, -11.272664072079987, -2.304180416152711, 
-10.749415928290118, -17.294055179550075, -16.95425710594061, -10.749415928290118, -10.749415928290118, -17.294055179550075, -12.460704095246543, -17.294055179550075, -17.294055179550075, -20.83821243446749, -16.95425710594061, -16.95425710594061, -16.95425710594061, -17.294055179550075, -16.95425710594061, -10.749415928290118, -10.749415928290118, -17.294055179550075, -16.95425710594061, -10.749415928290118, -10.749415928290118, -11.272664072079987, -17.294055179550075, -17.294055179550075, -11.272664072079987, -17.294055179550075, -17.294055179550075, -20.83821243446749, -17.294055179550075, -17.294055179550075, -20.83821243446749, -17.294055179550075, -17.294055179550075, -11.272664072079987, -17.294055179550075, -17.294055179550075, -11.272664072079987, -17.294055179550075, -17.294055179550075, -20.83821243446749, -17.294055179550075, -17.294055179550075, -10.749415928290118, -16.95425710594061, -10.749415928290118, -12.460704095246543, -16.95425710594061, -16.95425710594061, -16.95425710594061, -17.294055179550075, -10.749415928290118, -16.95425710594061, -10.749415928290118, -17.294055179550075, -20.83821243446749, -17.294055179550075, -17.294055179550075, -12.460704095246543, -16.95425710594061, -16.95425710594061, -16.95425710594061, -17.294055179550075, -16.95425710594061, -10.749415928290118, -10.749415928290118, -17.294055179550075, -16.95425710594061, -10.749415928290118, -10.749415928290118, -11.272664072079987, -17.294055179550075, -17.294055179550075, -11.272664072079987, -17.294055179550075, -10.749415928290118, -16.95425710594061, -10.749415928290118, -17.294055179550075, -16.95425710594061, -10.749415928290118, -10.749415928290118, -11.272664072079987, -10.749415928290118, -10.749415928290118, -0.9178860550328204]
language_types = [0, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 2, 2, 1, 2, 2, 2, 2, 2, 2, 2, 2, 3, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 3, 2, 2, 2, 2, 2, 2, 0, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 3, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 3, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 0, 2, 2, 2, 2, 2, 2, 3, 2, 2, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 1, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 3, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 3, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 0]
###Output
_____no_output_____
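###Markdown
Out of curiosity, here is a minimal sketch of how a list like `possible_languages` could be enumerated (this is not the code we actually used, which also derives the coding-length priors): a language is just one choice of signal for each of the four meanings.
###Code
from itertools import product

# every assignment of one signal to each meaning: len(signals) ** len(meanings) languages
enumerated = [list(zip(meanings, choice)) for choice in product(signals, repeat=len(meanings))]
print(len(enumerated))
###Output
_____no_output_____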
###Markdown
Measure the length of `possible_languages` to check whether you correctly figured out how many possible languages there should be. Using the `language_types` list, can you find the first holistic language in the list? Does it make sense that this language is classed as holistic? How does its prior probability compare to the first degenerate language in the list? If you want to see all the languages laid out along with their type and prior, you can do something like this:```pythonfor i in range(len(possible_languages)): print(possible_languages[i],language_types[i],log_priors[i])``` The rest of the code Now that we have our representation of languages, we can get on with the rest of the code. First we'll import our various libraries and define the usual functions we need for working with log probabilities.
###Code
import random
%matplotlib inline
import matplotlib.pyplot as plt
from IPython.display import set_matplotlib_formats
set_matplotlib_formats('svg', 'pdf')
from math import log, log1p, exp
from scipy.special import logsumexp
def normalize_logprobs(logprobs):
logtotal = logsumexp(logprobs) #calculates the summed log probabilities
normedlogs = []
for logp in logprobs:
        normedlogs.append(logp - logtotal) #normalise - subtracting in the log domain is equivalent to dividing in the normal domain
return normedlogs
def log_roulette_wheel(normedlogs):
r=log(random.random()) #generate a random number in [0,1), then convert to log
accumulator = normedlogs[0]
for i in range(len(normedlogs)):
if r < accumulator:
return i
accumulator = logsumexp([accumulator, normedlogs[i + 1]])
###Output
_____no_output_____
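###Markdown
A quick sanity check on these helpers (an optional sketch): normalising two equal log probabilities should give 0.5 each back in the normal domain, and the roulette wheel should then pick each index about half the time.
###Code
logs = normalize_logprobs([log(2), log(2)])
print([exp(l) for l in logs]) # [0.5, 0.5]
picks = [log_roulette_wheel(logs) for _ in range(1000)]
print(picks.count(0) / 1000) # roughly 0.5
###Output
_____no_output_____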
###Markdown
Other parametersWe first have a parameter, `error_probability`, which we can play with, as in the previous lab. This is the probability a literal speaker produces the "wrong" signal for a meaning. This is one of the ways in which languages can change and evolve over time. The learners also take this value into account when calculating the likelihood of the data they see given a particular language. In other words, learners will understand that sometimes a speaker can generate "wrong" data and therefore won't assign a dataset with the occasional error in it zero probability. The `pragmatic_speaker` parameter says whether or not the speaker will try and be a bit rational with their communication. We're not actually using the full RSA approach from the last lab, but a vastly simplified approximation. We'll go into this below. The `turnover` parameter states whether new individuals enter the population or not.
###Code
error_probability = 0.05 # Note that this is a probability, not a log probability
pragmatic_speaker = False
turnover = True
###Output
_____no_output_____
###Markdown
The learner The `update_posterior` function does all the work really. For this simulation we need a way of gradually learning as we go along, because when the agents are interacting, they need to use what they've learned so far to speak, but also continue to learn. Previously, we've done the Bayesian learning in one step: once all the data is available, for each language we calculated the likelihood and multiplied it by the prior. Now, we have to do the same, but for each sentence that the agents hear. It turns out that there's an easy trick to do this... each time the agents hear a sentence, instead of just using the prior, they instead use the posterior they calculated after the last sentence they heard. (The only exception is that if they haven't heard anything yet, they use the prior.) In this way, the posterior probability of the languages can gradually be "updated" as the agents hear data. Don't worry about this too much, but if you have some spare time you could see why this works by working out an example calculation for a few data items on a piece of paper. So, this function takes as input the current posterior, and a meaning and signal. It then works out for each language what the probability of that language generating that meaning-signal pair would be. This will be $1-\epsilon$ (where $\epsilon$ is the error probability, e.g. 0.05) if that meaning-signal pair is in the language and $\epsilon/3$ if that meaning-signal pair is not in the language. This is because the errors that the speaker might make are shared across all the signals, meaning that the probability of the correct data is slightly less than 1, and the probability of the wrong data is slightly greater than 0. Because these are log probabilities, the new posterior probability for each language is just the posterior probability for that language before, plus the log likelihood (normalised so everything adds up to one).
###Code
def update_posterior(posterior, meaning, signal):
in_language = log(1 - error_probability)
out_of_language = log(error_probability / (len(signals) - 1))
new_posterior = []
for i in range(len(posterior)):
if (meaning, signal) in possible_languages[i]:
new_posterior.append(posterior[i] + in_language)
else:
new_posterior.append(posterior[i] + out_of_language)
return normalize_logprobs(new_posterior)
###Output
_____no_output_____
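If you'd rather check the incremental-update trick numerically than on paper, here is a minimal sketch; it assumes `possible_languages`, `log_priors`, `signals` and `error_probability` are defined as above, and the observations are arbitrary examples:

```python
# Sequential updating should match one batch computation of prior + log-likelihoods.
from math import log

data = [('02', 'aa'), ('02', 'aa')]   # arbitrary example observations

posterior = log_priors                # sequential: update after each observation
for m, s in data:
    posterior = update_posterior(posterior, m, s)

batch = []                            # batch: sum log-likelihoods, normalise once
for i in range(len(log_priors)):
    loglik = 0.0
    for m, s in data:
        if (m, s) in possible_languages[i]:
            loglik += log(1 - error_probability)
        else:
            loglik += log(error_probability / (len(signals) - 1))
    batch.append(log_priors[i] + loglik)
batch = normalize_logprobs(batch)

print(max(abs(a - b) for a, b in zip(posterior, batch)))   # ~0, up to rounding
```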
###Markdown
Let's check that the `update_posterior` function makes sense. Try the following:

```python
print(log_priors[0])
new_log_posterior = update_posterior(log_priors, '02', 'aa')
print(new_log_posterior[0])
```

This essentially imagines a "newborn" agent, whose current posterior is the same as the prior (since the prior is just what you believe before seeing any data). That newborn hears the signal `aa` paired with the meaning `02` and updates their posterior as a result. What is printed is the posterior probability before and after this experience for the first language in the list, which you can see by typing:

```python
possible_languages[0]
```

*Try a few other meaning-signal pairs and look at other parts of the posterior list. What would you type in to have the posterior update for a second time, as if the newborn had heard a second meaning-signal pair?* Finally, we have a function to return a specific language from the posterior by the usual probabilistic sampling process.
###Code
def sample(posterior):
selected_index = log_roulette_wheel(posterior)
return possible_languages[selected_index]
###Output
_____no_output_____
###Markdown
Production, reception, and iterated learning The next chunk of code handles the actual iterated learning simulation. First, we have a function for a literal listener, `l0`, which takes a signal and a language and returns a meaning. If there are multiple possible meanings, it chooses one at random. Note that we're doing this a bit differently from the previous lab. Here the function is actually picking a meaning, rather than returning a set of probabilities over meanings. Conceptually, it's the same, however.
###Code
def l0(language, signal):
possibles = []
for m, s in language:
if s == signal:
possibles.append(m) # Possibles ends up with all the meanings that are mapped to the signal
if possibles == []:
return random.choice(meanings) # If we don't have any meanings for the signal, just guess!
else:
return random.choice(possibles) # Otherwise, pick one of the possible meanings
###Output
_____no_output_____
###Markdown
The literal speaker function `s0` takes a language and a meaning and returns the signal for that meaning in that language (assuming it doesn't turn out to be one of the times the speaker is making a mistake). Again, this is a little different from the last lab because we're picking a signal, rather than returning a set of probabilities. The pragmatic speaker function `s1` does a highly simplified version of the RSA model from the last lab. It listens to the signal that it would have produced as a literal speaker and if it doesn't map back onto the right meaning it chooses another signal at random. This isn't quite as powerful as the full RSA model. Can you see why?(N.B. The learner doesn't take this fact into account when calculating the likelihood as part of our `update_posterior` function above. It's like the learner doesn't know that the speaker is trying to be helpful.)
###Code
def s0(language, meaning):
for m, s in language:
if m == meaning:
signal = s # find the signal that is mapped to the meaning
# (nb. there's no synonymy possible in this model!)
if random.random() < error_probability: # add the occasional mistake
other_signals = []
for other_signal in signals:
if other_signal != signal:
other_signals.append(other_signal) # make a list of all the "wrong" signals
return random.choice(other_signals) # pick one of them
return signal
def s1(language, meaning):
signal = s0(language, meaning)
listener_meaning = l0(language, signal) # check what a listener would think that signal would mean
if listener_meaning != meaning:
signal = random.choice(signals) # if the intended meaning is different from the received one,
# pick a different signal at random
return signal
###Output
_____no_output_____
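A quick numerical sanity check of the speaker function before moving on (a sketch; the meaning `'02'` and the repetition count are arbitrary choices):

```python
# The signal mapped to '02' in possible_languages[0] should appear ~95% of the
# time; the remaining ~5% is spread over the other signals.
from collections import Counter
print(Counter(s0(possible_languages[0], '02') for _ in range(1000)))
```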
###Markdown
*Try the receive and produce functions out to make sure they make sense, e.g. by typing: `s0(possible_languages[0], '02')` several times (or better still running it many times in a loop).* The next two functions handle the population. `new_population` creates a population of newborn agents, each with their posterior over grammars equal to the prior. `population_communication` has pairs of agents in the population communicate with each other for a certain number of rounds. As they communicate, the hearer learns (i.e. updates the posterior) from the meaning-signal pairs the speaker produces. The function returns the data (i.e. meaning-signal pairs) that was produced by all the interactions.
###Code
def new_population(popsize):
population = []
for i in range(popsize):
baby = []
for p in log_priors:
baby.append(p)
population.append(baby) # each newborn starts out with only the prior distribution
return population
def population_communication(population, rounds):
data = []
for i in range(rounds):
meaning = random.choice(meanings) # pick a meaning
speaker_index = random.randrange(len(population)) # pick a speaker
speaker_posterior = population[speaker_index]
listener_index = random.randrange(len(population) - 1) # pick a listener
if listener_index >= speaker_index: # make sure the speaker and listener are different
listener_index += 1
listener_posterior = population[listener_index]
        language = sample(speaker_posterior) # sample a language from the speaker's posterior
if pragmatic_speaker:
signal = s1(language, meaning) # pragmatic signal
else:
signal = s0(language, meaning) # literal signal
listener_posterior = update_posterior(listener_posterior, meaning, signal) # update the listener
data.append((meaning, signal)) # add the meaning, signal pair to the data that the function returns
return data
###Output
_____no_output_____
###Markdown
Now, we have the actual simulation function, and a wee supporting function that gives some summary statistics about the overall posterior probability for *degenerate*, *holistic*, *other*, and *compositional* languages. This is purely to make visualising the results easier! The `simulation` function takes as input a number of generations to run the simulation, the number of rounds of interaction there will be each generation, the "bottleneck" on cultural transmission (i.e. the number of meaning-signal pairs passed on to the next generation), the population size, and the language that the very first generation is going to learn from.
###Code
def language_stats(posteriors):
stats = [0., 0., 0., 0.] # degenerate, holistic, other, compositional
for p in posteriors:
for i in range(len(p)):
stats[language_types[i]] += exp(p[i]) / len(posteriors) # the stats will be the average posterior probability
# in the population. Note the conversion from log back
# to normal probabilities
return stats
def simulation(generations, rounds, bottleneck, popsize, language):
results = []
population = new_population(popsize)
data = language # the data that the first generation is trained on is just whatever language we input
for i in range(generations):
for j in range(popsize): # First off, every learner gets a chance to learn
for k in range(bottleneck): # Do a bunch of learning trials
meaning, signal = random.choice(data) # choose a meaning, signal pair at random from the previous
# generation's data
population[j] = update_posterior(population[j], meaning, signal) # learn the meaning, signal pair
data = population_communication(population, rounds) # gather data from a bunch of communication rounds
results.append(language_stats(population)) # add stats to the results
if turnover:
population = new_population(popsize) # replace the population if the turnover variable is true
return results
###Output
_____no_output_____
###Markdown
Running the simulation (at last!) We've got a handy function to plot the results of a bunch of simulation runs, which will show us the average posterior probability assigned to *degenerate*, *holistic*, and *compositional* languages on one graph.
###Code
def plot_graph(results):
average_degenerate = []
average_holistic = []
average_compositional = []
for i in range(len(results[0])):
total_degenerate = 0
total_holistic = 0
total_compositional = 0
for result in results:
total_degenerate += result[i][0]
total_holistic += result[i][1]
total_compositional += result[i][3]
average_degenerate.append(total_degenerate / len(results))
average_holistic.append(total_holistic / len(results))
average_compositional.append(total_compositional / len(results))
plt.plot(average_degenerate, color='orange', label='degenerate')
plt.plot(average_holistic, color='green', label='holistic')
plt.plot(average_compositional, color='purple', label='compositional')
plt.xlabel('generations')
plt.ylabel('proportion')
plt.legend()
plt.grid()
###Output
_____no_output_____
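The run cell itself doesn't appear here, so as a sketch of how the pieces fit together (every parameter value below is an illustrative assumption, and `possible_languages[0]` is just an arbitrary starting language):

```python
# Average ten independent simulation runs and plot the result.
results = []
for run in range(10):
    results.append(simulation(generations=50, rounds=20, bottleneck=20,
                              popsize=2, language=possible_languages[0]))
plot_graph(results)
```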
###Markdown
import libraries
###Code
import pandas
import configparser
import psycopg2
config = configparser.ConfigParser()
config.read('config.ini')
host = config['myaws']['host']
db = config['myaws']['db']
user = config['myaws']['user']
pwd = config['myaws']['pwd']
conn = psycopg2.connect(host = host,
user = user,
password = pwd,
dbname = db
)
cur = conn.cursor()
###Output
_____no_output_____
###Markdown
create the house table; make sure to change the schema name to your gp number
###Code
# replace the schema and table name to your schema and table name
table_sql = """
CREATE TABLE IF NOT EXISTS gp2.house
(
price integer,
bed integer,
bath integer,
area integer,
address VARCHAR(200),
PRIMARY KEY(address)
);
"""
###Output
_____no_output_____
###Markdown
uncomment the lines in the cell below only if you want to delete the table; as it stands, the cell (re)creates it
###Code
#conn.rollback()
#table_sql="drop table if exists demo.house"
cur.execute(table_sql)
conn.commit()
###Output
_____no_output_____
###Markdown
define the search region
###Code
url = 'https://www.trulia.com/VA/Stephens_City/22655/'
import urllib.request
response = urllib.request.urlopen(url)
html_data= response.read()
# print(html_data.decode('utf-8'))
from bs4 import BeautifulSoup
soup = BeautifulSoup(html_data,'html.parser')
# print (soup)
###Output
_____no_output_____
###Markdown
insert the records into database
###Code
for li_class in soup.find_all('li', class_ = 'Grid__CellBox-sc-144isrp-0 SearchResultsList__WideCell-b7y9ki-2 jiZmPM'):
try:
for price_div in li_class.find_all('div',{'data-testid':'property-price'}):
price =int(price_div.text.replace('$','').replace(",",""))
for bed_div in li_class.find_all('div', {'data-testid':'property-beds'}):
bed= int(bed_div.text.replace('bd','').replace(",",""))
for bath_div in li_class.find_all('div',{'data-testid':'property-baths'}):
bath =int(bath_div.text.replace('ba','').replace(",",""))
for area_div in li_class.find_all('div',{'data-testid':'property-floorSpace'}):
area=int(area_div.text.split('sqft')[0].replace(",",""))
for address_div in li_class.find_all('div',{'data-testid':'property-address'}):
address =address_div.text
try:
sql_insert = """
insert into gp2.house(price,bed,bath,area,address)
values('{}','{}','{}','{}','{}')
""".format(price,bed,bath,area,address)
cur.execute(sql_insert)
conn.commit()
except:
conn.rollback()
except:
pass
###Output
_____no_output_____
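A side note on the insert statement above: interpolating scraped text straight into SQL is fragile (an address containing a quote character will break the statement and force the rollback). A safer sketch of the inner block using psycopg2's parameter binding:

```python
# Let psycopg2 quote the values instead of string formatting.
sql_insert = """
    insert into gp2.house(price, bed, bath, area, address)
    values (%s, %s, %s, %s, %s)
"""
try:
    cur.execute(sql_insert, (price, bed, bath, area, address))
    conn.commit()
except Exception:
    conn.rollback()
```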
###Markdown
Query the Table
###Code
df = pandas.read_sql_query('select * from gp2.house ', conn)
df[:]
###Output
_____no_output_____
###Markdown
basic stat
###Code
df.describe()
###Output
_____no_output_____
###Markdown
price distribution
###Code
df['price'].hist()
###Output
_____no_output_____
###Markdown
bed vs bath
###Code
df.plot.scatter(x='bed',y='bath')
###Output
_____no_output_____
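One more derived view that can be handy here, sketched under the assumption that a zero `area` means the listing didn't report it:

```python
# Price per square foot, skipping rows without a usable area.
df_valid = df[df['area'] > 0].copy()
df_valid['price_per_sqft'] = df_valid['price'] / df_valid['area']
df_valid['price_per_sqft'].hist()
```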
###Markdown
Extract Job Posts from Indeed Before extracting job posts from [Indeed](https://www.indeed.com/), make sure you have checked their [robots.txt](https://www.indeed.com/robots.txt) file. Create a table in database
###Code
import pandas
import configparser
import psycopg2
###Output
/home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages/psycopg2/__init__.py:144: UserWarning: The psycopg2 wheel package will be renamed from release 2.8; in order to keep installing from binary please use "pip install psycopg2-binary" instead. For details see: <http://initd.org/psycopg/docs/install.html#binary-install-from-pypi>.
""")
###Markdown
Read the database connection info from the config.ini
###Code
config = configparser.ConfigParser()
config.read('config.ini')
host = config['my aws']['host']
db = config['my aws']['db']
user = config['my aws']['user']
pwd = config['my aws']['password']
###Output
_____no_output_____
###Markdown
Establish a connection to the database, and create a cursor.
###Code
conn = psycopg2.connect(host = host,
user = user,
password = pwd,
dbname = db
)
cur = conn.cursor()
###Output
_____no_output_____
###Markdown
Design the table in SQL
###Code
# replace the schema and table name to your schema and table name
table_sql = """
CREATE TABLE IF NOT EXISTS gp13.indeed
(
id SERIAL,
job_title VARCHAR(200),
job_company VARCHAR(200),
job_loc VARCHAR(200),
job_salary VARCHAR(200),
job_summary TEXT,
PRIMARY KEY(id)
);
"""
###Output
_____no_output_____
###Markdown
create the table
###Code
cur.execute(table_sql)
conn.commit()
###Output
_____no_output_____
###Markdown
Request HTML [urllib.request](https://docs.python.org/3/library/urllib.request.html) makes simple HTTP requests to visit a web page and get the content via the Python standard library. Here we define the URL to search job posts about Intelligence analyst.
###Code
url = 'https://www.indeed.com/jobs?q=intelligence+analyst&start=4'
import urllib.request
response = urllib.request.urlopen(url)
html_data= response.read()
#print(html_data.decode('utf-8'))
###Output
_____no_output_____
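A side note on the URL: the `start` query parameter pages through the result list (Indeed typically steps it in units of 10). A sketch for fetching several pages, assuming the page structure stays the same:

```python
# Fetch several result pages by varying the start parameter.
import urllib.request

pages = []
for start in range(0, 50, 10):
    page_url = 'https://www.indeed.com/jobs?q=intelligence+analyst&start={}'.format(start)
    with urllib.request.urlopen(page_url) as page_response:
        pages.append(page_response.read())
```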
###Markdown
Parse HTML We can use the inspector tool in browsers to analyze webpages and use [beautifulsoup](https://www.crummy.com/software/BeautifulSoup/bs4/doc/) to extract webpage data. pip install beautifulsoup4 if needed.
###Code
!pip install beautifulsoup4
from bs4 import BeautifulSoup
soup = BeautifulSoup(html_data,'html.parser')
#print (soup)
###Output
_____no_output_____
###Markdown
Use the `tag.find_all('tag_name', tag_attr='possible_value')` function to return a list of tags where the attribute equals the possible_value. Common attributes include `id` and `class_`. Common functions include `tag.text`, which returns the visible part of the tag, and `tag.get('attribute')`, which returns the value of the attribute of the tag. Since all the job posts are in the div tag class = 'jobsearch-Sprep...', we need to find that div tag from the body tag.
###Code
for table_resultsBody in soup.find_all('table', id = 'resultsBody'):
pass
#print(table_resultsBody)
for table_pageContent in table_resultsBody.find_all('table', id = 'pageContent'):
pass
#print(table_pageContent)
for td_resultsCol in table_pageContent.find_all('td', id = 'resultsCol'):
pass
#print(td_resultsCol)
###Output
_____no_output_____
###Markdown
Save Data to Database Now that we have found the div tag that contains the job posts, we need to identify the job title, company, ratings, reviews, salary, and summary. We can save those records to our table in the database.
###Code
# identify the job title, company, ratings, reviews, salary, and summary
for div_row in td_resultsCol.find_all('div', class_='jobsearch-SerpJobCard unifiedRow row result'):
# find job title
job_title = None
job_company = None
job_rating = None
job_loc = None
job_salary = None
job_summary = None
for h2_title in div_row.find_all('h2', class_ = 'title'):
job_title = h2_title.a.text.strip().replace("'","_")
for div_dsc in div_row.find_all('div', class_ = 'sjcl'):
#find company name
for span_company in div_dsc.find_all('span', class_ = 'company'):
job_company = span_company.text.strip().replace("'","_")
# find location
for div_loc in div_dsc.find_all('div', class_ = 'location accessible-contrast-color-location'):
job_loc = div_loc.text.strip().replace("'","_")
# find salary
for div_salary in div_row.find_all('div',class_ ='salarySnippet'):
job_salary = div_salary.text.strip().replace("'","_")
#find summary
for div_summary in div_row.find_all('div', class_ = 'summary'):
job_summary = div_summary.text.strip().replace("'","_")
# insert into database
sql_insert = """
insert into gp13.indeed(job_title,job_company,job_loc,job_salary,job_summary)
values('{}','{}','{}','{}','{}')
""".format(job_title,job_company,job_loc,job_salary,job_summary)
cur.execute(sql_insert)
conn.commit()
###Output
_____no_output_____
###Markdown
Query the Table
###Code
df = pandas.read_sql_query('select count(*) as count,job_title from gp13.indeed group by job_title order by count desc ', conn)
df.plot.bar(x='job_title')
cur.close()
conn.close()
###Output
_____no_output_____
###Markdown
Extract Job Posts from Indeed Before extracting job posts from [Indeed](https://www.indeed.com/), make sure you have checked their [robots.txt](https://www.indeed.com/robots.txt) file. Create a table in database
###Code
import pandas
import configparser
import psycopg2
###Output
/home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages/psycopg2/__init__.py:144: UserWarning: The psycopg2 wheel package will be renamed from release 2.8; in order to keep installing from binary please use "pip install psycopg2-binary" instead. For details see: <http://initd.org/psycopg/docs/install.html#binary-install-from-pypi>.
""")
###Markdown
Read the database connection info from the config.ini
###Code
config = configparser.ConfigParser()
config.read('config.ini')
host = config['myaws']['host']
db = config['myaws']['db']
user = config['myaws']['user']
pwd = config['myaws']['pwd']
###Output
_____no_output_____
###Markdown
Establish a connection to the database, and create a cursor.
###Code
conn = psycopg2.connect(host = host,
user = user,
password = pwd,
dbname = db
)
cur = conn.cursor()
###Output
_____no_output_____
###Markdown
Design the table in SQL
###Code
# replace the schema and table name to your schema and table name
table_sql = """
CREATE TABLE IF NOT EXISTS gp10.indeed
(
id SERIAL,
job_title VARCHAR(200),
job_company VARCHAR(200),
job_loc VARCHAR(200),
job_salary VARCHAR(200),
job_summary TEXT,
PRIMARY KEY(id)
);
"""
###Output
_____no_output_____
###Markdown
create the table
###Code
cur.execute(table_sql)
conn.commit()
###Output
_____no_output_____
###Markdown
Request HTML [urllib.request](https://docs.python.org/3/library/urllib.request.html) makes simple HTTP requests to visit a web page and get the content via the Python standard library. Here we define the URL to search job posts about Intelligence analyst.
###Code
url = 'https://www.indeed.com/jobs?q=intelligence+analyst&start=2'
import urllib.request
response = urllib.request.urlopen(url)
html_data= response.read()
#print(html_data.decode('utf-8'))
###Output
_____no_output_____
###Markdown
Parse HTML We can use the inspector tool in browsers to analyze webpages and use [beautifulsoup](https://www.crummy.com/software/BeautifulSoup/bs4/doc/) to extract webpage data. pip install beautifulsoup4 if needed.
###Code
!pip install beautifulsoup4
from bs4 import BeautifulSoup
soup = BeautifulSoup(html_data,'html.parser')
#print (soup)
###Output
_____no_output_____
###Markdown
Use the `tag.find_all('tag_name', tag_attr='possible_value')` function to return a list of tags where the attribute equals the possible_value. Common attributes include `id` and `class_`. Common functions include `tag.text`, which returns the visible part of the tag, and `tag.get('attribute')`, which returns the value of the attribute of the tag. Since all the job posts are in the div tag class = 'jobsearch-Sprep...', we need to find that div tag from the body tag.
###Code
for table_resultsBody in soup.find_all('table', id = 'resultsBody'):
pass
#print(table_resultsBody)
for table_pageContent in table_resultsBody.find_all('table', id = 'pageContent'):
pass
#print(table_pageContent)
for td_resultsCol in table_pageContent.find_all('td', id = 'resultsCol'):
pass
#print(td_resultsCol)
###Output
_____no_output_____
###Markdown
Save Data to Database Now that we have found the div tag that contains the job posts, we need to identify the job title, company, ratings, reviews, salary, and summary. We can save those records to our table in the database.
###Code
# identify the job title, company, ratings, reviews, salary, and summary
for div_row in td_resultsCol.find_all('div', class_='jobsearch-SerpJobCard unifiedRow row result'):
# find job title
job_title = None
job_company = None
job_rating = None
job_loc = None
job_salary = None
job_summary = None
for h2_title in div_row.find_all('h2', class_ = 'title'):
job_title = h2_title.a.text.strip().replace("'","_")
for div_dsc in div_row.find_all('div', class_ = 'sjcl'):
#find company name
for span_company in div_dsc.find_all('span', class_ = 'company'):
job_company = span_company.text.strip().replace("'","_")
# find location
for div_loc in div_dsc.find_all('div', class_ = 'location accessible-contrast-color-location'):
job_loc = div_loc.text.strip().replace("'","_")
# find salary
for div_salary in div_row.find_all('div',class_ ='salarySnippet'):
job_salary = div_salary.text.strip().replace("'","_")
#find summary
for div_summary in div_row.find_all('div', class_ = 'summary'):
job_summary = div_summary.text.strip().replace("'","_")
# insert into database
sql_insert = """
insert into gp10.indeed(job_title,job_company,job_loc,job_salary,job_summary)
values('{}','{}','{}','{}','{}')
""".format(job_title,job_company,job_loc,job_salary,job_summary)
cur.execute(sql_insert)
conn.commit()
###Output
_____no_output_____
###Markdown
View the Table
###Code
df = pandas.read_sql_query('select count(*) as count,job_title from gp10.indeed group by job_title order by count desc ', conn)
df.plot.bar(x='job_title')
cur.close()
conn.close()
###Output
_____no_output_____
###Markdown
Extract Job Posts from Indeed Before extracting job posts from [Indeed](https://www.indeed.com/), make sure you have checked their [robots.txt](https://www.indeed.com/robots.txt) file. Create a table in database
###Code
import pandas
import configparser
import psycopg2
###Output
/home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages/psycopg2/__init__.py:144: UserWarning: The psycopg2 wheel package will be renamed from release 2.8; in order to keep installing from binary please use "pip install psycopg2-binary" instead. For details see: <http://initd.org/psycopg/docs/install.html#binary-install-from-pypi>.
""")
###Markdown
Read the database connection info from the config.ini
###Code
config = configparser.ConfigParser()
config.read('config.ini')
host = config['myaws']['host']
db = config['myaws']['db']
user = config['myaws']['user']
pwd = config['myaws']['pwd']
###Output
_____no_output_____
###Markdown
Establish a connection to the database, and create a cursor.
###Code
conn = psycopg2.connect(host = host,
user = user,
password = pwd,
dbname = db
)
cur = conn.cursor()
###Output
_____no_output_____
###Markdown
Design the table in SQL
###Code
# replace the schema and table name to your schema and table name
table_sql = """
CREATE TABLE IF NOT EXISTS gp14.indeed
(
id SERIAL,
job_title VARCHAR(200),
job_company VARCHAR(200),
job_loc VARCHAR(200),
job_salary VARCHAR(200),
job_summary TEXT,
PRIMARY KEY(id)
);
"""
###Output
_____no_output_____
###Markdown
create the table
###Code
cur.execute(table_sql)
conn.commit()
###Output
_____no_output_____
###Markdown
Request HTML [urllib.request](https://docs.python.org/3/library/urllib.request.html) makes simple HTTP requests to visit a web page and get the content via the Python standard library. Here we define the URL to search job posts about Intelligence analyst.
###Code
url = 'https://www.indeed.com/jobs?q=intelligence+analyst&start=0'
import urllib.request
response = urllib.request.urlopen(url)
html_data= response.read()
#print(html_data.decode('utf-8'))
###Output
_____no_output_____
###Markdown
Parse HTML We can use the inspector tool in browsers to analyze webpages and use [beautifulsoup](https://www.crummy.com/software/BeautifulSoup/bs4/doc/) to extract webpage data. pip install beautifulsoup4 if needed.
###Code
!pip install beautifulsoup4
from bs4 import BeautifulSoup
soup = BeautifulSoup(html_data,'html.parser')
#print (soup)
###Output
_____no_output_____
###Markdown
Use the `tag.find_all('tag_name', tag_attr='possible_value')` function to return a list of tags where the attribute equals the possible_value. Common attributes include `id` and `class_`. Common functions include `tag.text`, which returns the visible part of the tag, and `tag.get('attribute')`, which returns the value of the attribute of the tag. Since all the job posts are in the div tag class = 'jobsearch-Sprep...', we need to find that div tag from the body tag.
###Code
for table_resultsBody in soup.find_all('table', id = 'resultsBody'):
pass
#print(table_resultsBody)
for table_pageContent in table_resultsBody.find_all('table', id = 'pageContent'):
pass
#print(table_pageContent)
for td_resultsCol in table_pageContent.find_all('td', id = 'resultsCol'):
pass
#print(td_resultsCol)
###Output
_____no_output_____
###Markdown
Save Data to Database Now that we have found the div tag that contains the job posts, we need to identify the job title, company, ratings, reviews, salary, and summary. We can save those records to our table in the database.
###Code
# identify the job title, company, ratings, reviews, salary, and summary
for div_row in td_resultsCol.find_all('div', class_='jobsearch-SerpJobCard unifiedRow row result'):
# find job title
job_title = None
job_company = None
job_rating = None
job_loc = None
job_salary = None
job_summary = None
for h2_title in div_row.find_all('h2', class_ = 'title'):
job_title = h2_title.a.text.strip().replace("'","_")
for div_dsc in div_row.find_all('div', class_ = 'sjcl'):
#find company name
for span_company in div_dsc.find_all('span', class_ = 'company'):
job_company = span_company.text.strip().replace("'","_")
# find location
for div_loc in div_dsc.find_all('div', class_ = 'location accessible-contrast-color-location'):
job_loc = div_loc.text.strip().replace("'","_")
# find salary
for div_salary in div_row.find_all('div',class_ ='salarySnippet'):
job_salary = div_salary.text.strip().replace("'","_")
#find summary
for div_summary in div_row.find_all('div', class_ = 'summary'):
job_summary = div_summary.text.strip().replace("'","_")
# insert into database
sql_insert = """
insert into gp14.indeed(job_title,job_company,job_loc,job_salary,job_summary)
values('{}','{}','{}','{}','{}')
""".format(job_title,job_company,job_loc,job_salary,job_summary)
cur.execute(sql_insert)
conn.commit()
###Output
_____no_output_____
###Markdown
View the Table
###Code
df = pandas.read_sql_query('select count(*) as count,job_title from gp14.indeed group by job_title order by count desc ', conn)
df.plot.bar(x='job_title')
cur.close()
conn.close()
###Output
_____no_output_____
###Markdown
**Q1.** Given the following data, build a decision tree with *three* leaves.

x|y
-|-
0|4
1|5
2|6
4|100

Use MSE as the measure of quality in the nodes. That means we have an impurity (entropy in the case of classification) $$H(R)=\frac{1}{N}\sum(y_i-y_{*})^2.$$ To find the minimum, we can take a derivative: $$H'(R)=-\frac{2}{N}\sum(y_i-y_{*})=2\left(y_{*}-\frac{1}{N}\sum y_i\right)=0\Rightarrow y_{*}=\bar{y}.$$ And the quality of the split is given by $$Q=H(R)-\frac{|R_l|}{|R|}H(R_l)-\frac{|R_r|}{|R|}H(R_r)\to \max,$$ or equivalently $$\tilde{Q}=\frac{|R_l|}{|R|}H(R_l)+\frac{|R_r|}{|R|}H(R_r)\to \min.$$
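Before reaching for sklearn, we can evaluate $\tilde{Q}$ for the candidate first splits by hand; here is a minimal sketch (the candidate thresholds are simply the midpoints between the sorted x values):

```python
# Evaluate Q~ = |R_l|/|R| * H(R_l) + |R_r|/|R| * H(R_r) for candidate thresholds.
import numpy as np

x = np.array([0, 1, 2, 4])
y = np.array([4, 5, 6, 100])

def H(values):
    # node impurity: MSE around the node mean (empty nodes contribute nothing)
    return np.mean((values - values.mean()) ** 2) if len(values) else 0.0

for t in [0.5, 1.5, 3.0]:  # midpoints between the sorted x values
    left, right = y[x <= t], y[x > t]
    q = len(left) / len(y) * H(left) + len(right) / len(y) * H(right)
    print(t, round(q, 2))
# x <= 3.0 isolates the outlier y = 100 and gives by far the smallest Q~
```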
###Code
from sklearn.tree import DecisionTreeRegressor
import numpy as np # np is used below but was never imported in this notebook
import matplotlib.pyplot as plt # likewise for plt
reg = DecisionTreeRegressor(max_leaf_nodes=3, random_state=0)
X = np.array([0, 1, 2, 4]).reshape(-1,1)
y = np.array([4, 5, 6, 100])
reg.fit(X,y)
from sklearn.tree import plot_tree
plot_tree(reg, filled=True)
X_test = np.arange(0,5,0.05)
y_pred = reg.predict(X_test.reshape(-1,1))
plt.plot(X_test, y_pred)
plt.scatter(X,y, c='red')
###Output
_____no_output_____
###Markdown
Ensemble of Models
- Vote
- Bootstrap Aggregation (Bagging)
- Random Forest
- Gradient Boosting
###Code
from sklearn.datasets import make_blobs
from sklearn.datasets import make_classification
from sklearn.datasets import make_moons
#blobs = make_blobs(n_samples=400, random_state=5, n_features=2, centers=2)
#blobs = make_classification(n_samples=400, n_features=2, n_redundant=0, n_informative=2, n_clusters_per_class=1, random_state=19)
blobs = make_moons(n_samples=400, noise=0.7, random_state=56)
X = blobs[0]
y = blobs[1]
plt.scatter(X[:,0],X[:,1], c=y)
from sklearn.tree import DecisionTreeClassifier
clf = DecisionTreeClassifier()
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=42)
clf.fit(X_train,y_train)
from mlxtend.plotting import plot_decision_regions
plot_decision_regions(X,y,clf)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=45)
clf.fit(X_train,y_train)
plot_decision_regions(X,y,clf)
###Output
/usr/local/lib/python3.7/dist-packages/mlxtend/plotting/decision_regions.py:244: MatplotlibDeprecationWarning: Passing unsupported keyword arguments to axis() will raise a TypeError in 3.3.
ax.axis(xmin=xx.min(), xmax=xx.max(), y_min=yy.min(), y_max=yy.max())
###Markdown
Majority vote
###Code
from sklearn.ensemble import BaggingClassifier
clf = BaggingClassifier(base_estimator=DecisionTreeClassifier(), n_estimators=20, max_samples=0.9, bootstrap=False, random_state=4).fit(X, y)
plot_decision_regions(X,y,clf)
X = np.arange(1,5,0.05)
y = np.sin(X)
y[::10] += 0.4*np.random.randn(len(y[::10]))
plt.scatter(X,y)
plt.plot(X, np.sin(X))
from sklearn.tree import DecisionTreeRegressor
classifiers = {}
for i in range(10):
X_train, X_test, y_train, y_test = train_test_split(X.reshape(-1,1), y, test_size=0.2, random_state=i)
classifiers['clf'+str(i)] = DecisionTreeRegressor(max_depth=2)
classifiers['clf'+str(i)].fit(X_train, y_train)
y_mean = np.zeros(X.shape[0])
for i in range(10):
y_tmp = classifiers['clf'+str(i)].predict(X.reshape(-1,1))
plt.plot(X, y_tmp, c='blue', alpha=0.2)
y_mean +=y_tmp
plt.plot(X, y_mean/10, c='m')
plt.plot(X, np.sin(X), c='r')
plt.show()
y_mean = sum(classifiers['clf'+str(i)].predict(X.reshape(-1,1)) for i in range(10)) / 10 # the builtin sum avoids the deprecated np.sum(generator) call
###Output
_____no_output_____
###Markdown
Bias - Variance Decomposition $$Error = Bias + Variance + Noise$$ A linear model usually has larger bias.
###Code
from sklearn.linear_model import LinearRegression
classifiers = {}
for i in range(10):
X_train, X_test, y_train, y_test = train_test_split(X.reshape(-1,1), y, test_size=0.2, random_state=i)
classifiers['clf'+str(i)] = LinearRegression()
classifiers['clf'+str(i)].fit(X_train, y_train)
y_mean = np.zeros(X.shape[0])
for i in range(10):
y_tmp = classifiers['clf'+str(i)].predict(X.reshape(-1,1))
plt.plot(X, y_tmp, c='blue', alpha=0.2)
y_mean +=y_tmp
plt.plot(X, y_mean/10, c='m')
plt.plot(X, np.sin(X), c='r')
plt.show()
###Output
_____no_output_____
###Markdown
Variance of Bagging $$\mathrm{Var}(a) = \frac{1}{N}\,\mathrm{Var}(a_n) + \frac{N-1}{N}\,\mathrm{Cov}(a_n, a_m)$$ Deep trees have small bias. To get smaller variance, we should take independent models. Bootstrap
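To see what this formula implies numerically, a tiny sketch with made-up variance and covariance values:

```python
# Var(a) = Var(a_n)/N + (N-1)/N * Cov(a_n, a_m): the covariance term survives
# as N grows, so decorrelating the models matters more than adding more of them.
var_single, cov = 1.0, 0.3    # hypothetical per-model variance and covariance
for N in [1, 10, 100]:
    print(N, var_single / N + (N - 1) / N * cov)
```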
###Code
from sklearn.ensemble import BaggingRegressor
clf = BaggingRegressor(base_estimator=DecisionTreeRegressor(max_depth=5), max_samples=1.0, n_estimators=10).fit(X.reshape(-1,1), y)
plt.plot(X, clf.predict(X.reshape(-1,1)))
plt.plot(X, np.sin(X), c='r')
###Output
_____no_output_____
###Markdown
**Q2.** An ML engineer has found the following observations

x|y
-|-
1|6
2|6
3|12
4|18

with two trees $x>2.5$ and $x>3.5$. He decided to use Bagging. For the first tree he has samples [1, 1, 2, 3] and for the second tree [2, 3, 4, 4]. Which predictions will he obtain in the leaves minimizing MSE? (A worked sketch is given below.)

Using a random subset of features for each tree -> Random Forest
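Here is the worked sketch promised for **Q2**, before the Random Forest code (our reading of the exercise: under MSE each leaf predicts the mean of the bootstrap samples that fall into it, and bagging averages the two trees):

```python
# Leaf values are sample means under MSE; bagging averages the two trees.
import numpy as np

y = {1: 6, 2: 6, 3: 12, 4: 18}

# tree 1, bootstrap sample [1, 1, 2, 3], split x > 2.5
t1_left, t1_right = np.mean([y[1], y[1], y[2]]), np.mean([y[3]])   # 6.0, 12.0

# tree 2, bootstrap sample [2, 3, 4, 4], split x > 3.5
t2_left, t2_right = np.mean([y[2], y[3]]), np.mean([y[4], y[4]])   # 9.0, 18.0

def predict(x):
    p1 = t1_left if x <= 2.5 else t1_right
    p2 = t2_left if x <= 3.5 else t2_right
    return (p1 + p2) / 2

print([predict(x) for x in [1, 2, 3, 4]])   # [7.5, 7.5, 10.5, 15.0]
```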
###Code
from sklearn.ensemble import RandomForestRegressor
rf = RandomForestRegressor(n_estimators=50, max_depth=5)
rf.fit(X.reshape(-1,1), y)
plt.plot(X, rf.predict(X.reshape(-1,1)))
plt.plot(X, np.sin(X), c='r')
###Output
_____no_output_____
###Markdown
###Code
X_old = blobs[0]
y_old = blobs[1]
X_train_old, X_test_old, y_train_old, y_test_old = train_test_split(X_old, y_old, test_size=0.1, random_state=42)
from sklearn.ensemble import RandomForestClassifier
rf_clf = RandomForestClassifier(min_samples_leaf=3, max_depth=5, oob_score=True).fit(X_train_old,y_train_old) #n_estimators=20,
plot_decision_regions(X_old,y_old,rf_clf)
rf_clf.oob_score_
###Output
/usr/local/lib/python3.7/dist-packages/mlxtend/plotting/decision_regions.py:244: MatplotlibDeprecationWarning: Passing unsupported keyword arguments to axis() will raise a TypeError in 3.3.
ax.axis(xmin=xx.min(), xmax=xx.max(), y_min=yy.min(), y_max=yy.max())
###Markdown
Recommendations: max_features = $\frac{d}{3}$ for regression and $\sqrt{d}$ for classification. (See Leo Breiman. Random forests. Machine Learning, 45(1):5–32, October 2001)
###Code
# https://scikit-learn.org/stable/auto_examples/ensemble/plot_forest_importances.html
rf_clf.feature_importances_
###Output
_____no_output_____
###Markdown
Boosting

Random Forest needs almost no hyperparameter tuning and comes with out-of-bag validation for free, but there exists a better method: Gradient Boosting, which is typically used as the final model in commercial applications.

Problems with Bagging/RF:
1. If we take a biased base model for Bagging, then the ensemble will be biased. Bagging fixes only variance.
2. RF is time consuming.

Assume we have models $a_1(x), a_2(x),\ldots, a_K(x)$ and we want to build the composition $$a(x) = \sum_{k=1}^K a_k(x).$$ Fit the first model as usual (minimizing the loss function): $$\frac{1}{N}\sum_{i=1}^N L(y_i, a_1(x_i)) \to \min.$$ To find the second model we can use $a_1(x)$: $$\frac{1}{N}\sum_{i=1}^N L(y_i, a_1(x_i)+a_2(x_i)) \to \min,$$ and so on... For MSE, $$\frac{1}{N}\sum_{i=1}^N (y_i - (a_1(x_i)+a_2(x_i)))^2 = \frac{1}{N}\sum_{i=1}^N \left((y_i - a_1(x_i)) - a_2(x_i)\right)^2 \to \min.$$ That means that we fit $a_2(x)$ on the errors of the model $a_1(x).$
###Code
X_train, X_test, y_train, y_test = train_test_split(X.reshape(-1,1), y, test_size=0.2, random_state=0)
dt_reg1 = DecisionTreeRegressor(max_depth=1).fit(X_train,y_train)
dt_reg2 = DecisionTreeRegressor(max_depth=1).fit(X_train,y_train-dt_reg1.predict(X_train))
dt_reg3 = DecisionTreeRegressor(max_depth=1).fit(X_train,y_train-dt_reg1.predict(X_train)-dt_reg2.predict(X_train))
plt.figure(figsize=(11,11))
plt.subplot(321)
plt.plot(X, dt_reg1.predict(X.reshape(-1,1)), label="$a_1(x)$", c='r')
plt.scatter(X, y, c='b')
plt.ylim([-1.5, 1.5])
plt.legend()
plt.subplot(322)
plt.scatter(X, y - dt_reg1.predict(X.reshape(-1,1)), label="$y-a_1(x)$")
plt.ylim([-1.5, 1.5])
plt.legend()
plt.subplot(323)
plt.plot(X, dt_reg2.predict(X.reshape(-1,1)), label="$a_2(x)$", c='r')
plt.scatter(X, y - dt_reg1.predict(X.reshape(-1,1)), label="$y-a_1(x)$", c='b')
plt.ylim([-1.5, 1.5])
plt.legend()
plt.subplot(324)
plt.scatter(X, y - dt_reg1.predict(X.reshape(-1,1)) -dt_reg2.predict(X.reshape(-1,1)), label="$y-a_1(x)-a_2(x)$")
plt.ylim([-1.5, 1.5])
plt.legend()
plt.subplot(325)
plt.plot(X, dt_reg3.predict(X.reshape(-1,1)), label="$a_3(x)$", c='r')
plt.scatter(X, y - dt_reg1.predict(X.reshape(-1,1)) -dt_reg2.predict(X.reshape(-1,1)), label="$y-a_1(x)-a_2(x)$")
plt.ylim([-1.5, 1.5])
plt.legend()
plt.subplot(326)
plt.plot(X, dt_reg1.predict(X.reshape(-1,1))+dt_reg2.predict(X.reshape(-1,1))+dt_reg3.predict(X.reshape(-1,1)), label="$a_1(x)+a_2(x)+a_3(x)$", c='r')
plt.scatter(X, y, c='b')
plt.ylim([-1.5, 1.5])
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
**Q3.** An ML engineer has found the following observations

x|y
-|-
1|6
2|6
3|12
4|18

with two trees $x>2.5$ and $x>3.5$. He decided to use Boosting with learning rate $\eta$. Which predictions does he get in the leaves minimizing the following loss (used in xgboost) $$Q = \sum_{i=1}^{n} (y_i-a(x_i))^2 + \lambda \sum_{j=1}^{J} y_j^2,$$ where $J$ is the number of leaves?

1. $\eta=1$ and $\lambda=1$
2. $\eta=0.5$ and $\lambda=1$

*Solution 1*

For the first tree $R_l$ contains $x=1$ and $2$ and $R_r$ contains $x=3$ and $4.$ To find the values $y_l$ and $y_r$ in the leaves, we should solve the following optimization problems: $$Q(R_l) = (6-y_l)^2+(6-y_l)^2 + y_l^2+y_r^2\to \min$$ $$Q(R_r) = (12-y_r)^2+(18-y_r)^2 + y_l^2+y_r^2\to \min$$ Let's find the stationary points: $$\frac{\partial Q(R_l)}{\partial y_l} = -2(6-y_l)-2(6-y_l) + 2y_l =0$$ $$\frac{\partial Q(R_r)}{\partial y_r} = -2(12-y_r)-2(18-y_r) + 2y_r =0$$ We will get $y_l = 4$ and $y_r=10.$

Gradient Boosting (Friedman, J. H., 1999)

Assume we have built the first $k$ models $$y_i \approx a_1(x_i)+a_2(x_i)+\ldots +a_k(x_i).$$ To find model $a_{k+1}(x)$ we should minimize $$\frac{1}{N}\sum_{i=1}^N L(y_i, a_1(x_i)+a_2(x_i)+\ldots +a_k(x_i)+a_{k+1}(x_i)) \to \min.$$ If we take a look at the function $L(y_i, z),$ then for small $s$ proportional to $- \frac{\partial L(y_i,z)}{\partial z}$ $$ L(y_i, z+s) \leq L(y_i, z)$$ because we shift $z$ in the direction in which $L(y_i,z)$ decreases. We can use a so-called learning rate coefficient $\eta$ to ensure the shift is not too large. Denote $$s_i^{(k+1)} = -\left. \frac{\partial L(y_i, z)}{\partial z}\right|_{z=a_1(x_i)+a_2(x_i)+\ldots +a_k(x_i)}.$$ Then we can find the model $a_{k+1}(x)$ as an MSE approximation of $s_i^{(k+1)}$: $$\frac{1}{N}\sum_{i=1}^N (s_i^{(k+1)}- a_{k+1}(x_i))^2 \to \min$$ If we use a constant learning rate, the model will look like this: $$a(x) = \eta\sum_{k=1}^{K} a_k(x).$$
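Before handing things over to LightGBM below, here is a minimal from-scratch sketch of the boosting loop just described, specialised to MSE, where the pseudo-residuals $s_i^{(k+1)}$ are simply $y_i$ minus the current prediction. It assumes `X` and `y` are still the sine-wave arrays from the bagging examples above; the depth, learning rate and number of steps are arbitrary choices:

```python
# Gradient boosting for MSE with a constant learning rate eta.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.tree import DecisionTreeRegressor

eta, K = 0.3, 50
pred = np.zeros_like(y)                       # start from the zero model
for k in range(K):
    residuals = y - pred                      # -dL/dz for MSE (the factor of 2
                                              # is absorbed by the learning rate)
    tree = DecisionTreeRegressor(max_depth=2).fit(X.reshape(-1, 1), residuals)
    pred += eta * tree.predict(X.reshape(-1, 1))

plt.plot(X, pred, label='boosted model')
plt.plot(X, np.sin(X), c='r', label='target')
plt.legend()
```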
###Code
import lightgbm as lgb
import pandas as pd # pd is used below but was never imported in this notebook
housing = pd.read_csv("https://raw.githubusercontent.com/ageron/handson-ml2/master/datasets/housing/housing.csv")
housing.head()
y = housing['median_house_value']
housing.drop(columns=['median_house_value'], inplace=True)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(housing, y, test_size=0.2, random_state=42)
from sklearn.preprocessing import OneHotEncoder
from sklearn.compose import ColumnTransformer
transform = ColumnTransformer([('OneHot', OneHotEncoder(drop='first'), ['ocean_proximity'])], remainder='passthrough')
transform.fit(X_train)
X_train_hot = pd.DataFrame(transform.transform(X_train), columns=transform.get_feature_names_out())
X_test_hot = pd.DataFrame(transform.transform(X_test), columns=transform.get_feature_names_out())
X_train_hot.head()
gb = lgb.LGBMRegressor(num_leaves=31, learning_rate=0.2, n_estimators=100) #, reg_alpha=0.1, reg_lambda=0.1
gb.fit(X_train_hot, y_train, eval_set=[(X_train_hot, y_train),(X_test_hot, y_test)]) #early_stopping_rounds=6
gb.score(X_test_hot, y_test)
lgb.plot_metric(gb)
lgb.plot_importance(gb)
###Output
_____no_output_____
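One note on the commented-out `early_stopping_rounds=6` above: in recent LightGBM versions early stopping is passed as a callback instead. A sketch (the round count and `n_estimators` are arbitrary):

```python
# Stop adding trees once the validation metric hasn't improved for 6 rounds.
gb = lgb.LGBMRegressor(num_leaves=31, learning_rate=0.2, n_estimators=500)
gb.fit(X_train_hot, y_train,
       eval_set=[(X_test_hot, y_test)],
       callbacks=[lgb.early_stopping(stopping_rounds=6)])
print(gb.best_iteration_)   # the round at which training actually stopped
```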
###Markdown
Extract Job Posts from Indeed Before extracting job posts from [Indeed](https://www.indeed.com/), make sure you have checked their [robots.txt](https://www.indeed.com/robots.txt) file. Create a table in database
###Code
import pandas
import configparser
import psycopg2
###Output
/home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages/psycopg2/__init__.py:144: UserWarning: The psycopg2 wheel package will be renamed from release 2.8; in order to keep installing from binary please use "pip install psycopg2-binary" instead. For details see: <http://initd.org/psycopg/docs/install.html#binary-install-from-pypi>.
""")
###Markdown
Read the database connection info from the config.ini
###Code
config = configparser.ConfigParser()
config.read('config.ini')
host = config['myaws']['host']
db = config['myaws']['db']
user = config['myaws']['user']
pwd = config['myaws']['pwd']
###Output
_____no_output_____
###Markdown
Establish a connection to the database, and create a cursor.
###Code
conn = psycopg2.connect(host = host,
user = user,
password = pwd,
dbname = db
)
cur = conn.cursor()
###Output
_____no_output_____
###Markdown
Design the table in SQL
###Code
# replace the schema and table name to your schema and table name
table_sql = """
CREATE TABLE IF NOT EXISTS gp18.indeed
(
id SERIAL,
job_title VARCHAR(200),
job_company VARCHAR(200),
job_loc VARCHAR(200),
job_salary VARCHAR(200),
job_summary TEXT,
PRIMARY KEY(id)
);
"""
###Output
_____no_output_____
###Markdown
create the table
###Code
cur.execute(table_sql)
conn.commit()
###Output
_____no_output_____
###Markdown
Request HTML [urllib.request](https://docs.python.org/3/library/urllib.request.html) makes simple HTTP requests to visit a web page and get the content via the Python standard library. Here we define the URL to search job posts about Intelligence analyst.
###Code
url = 'https://www.indeed.com/jobs?q=intelligence+analyst&start=2'
import urllib.request
response = urllib.request.urlopen(url)
html_data= response.read()
#print(html_data.decode('utf-8'))
###Output
_____no_output_____
###Markdown
Parse HTML We can use the inspector tool in browsers to analyze webpages and use [beautifulsoup](https://www.crummy.com/software/BeautifulSoup/bs4/doc/) to extract webpage data. pip install beautifulsoup4 if needed.
###Code
!pip install beautifulsoup4
from bs4 import BeautifulSoup
soup = BeautifulSoup(html_data,'html.parser')
#print (soup)
###Output
_____no_output_____
###Markdown
Use the `tag.find_all('tag_name', tag_attr='possible_value')` function to return a list of tags where the attribute equals the possible_value. Common attributes include `id` and `class_`. Common functions include `tag.text`, which returns the visible part of the tag, and `tag.get('attribute')`, which returns the value of the attribute of the tag. Since all the job posts are in the div tag class = 'jobsearch-Sprep...', we need to find that div tag from the body tag.
###Code
for table_resultsBody in soup.find_all('table', id = 'resultsBody'):
pass
#print(table_resultsBody)
for table_pageContent in table_resultsBody.find_all('table', id = 'pageContent'):
pass
#print(table_pageContent)
for td_resultsCol in table_pageContent.find_all('td', id = 'resultsCol'):
pass
#print(td_resultsCol)
###Output
_____no_output_____
###Markdown
Save Data to Database Now that we have found the div tag that contains the job posts, we need to identify the job title, company, ratings, reviews, salary, and summary. We can save those records to our table in the database.
###Code
# identify the job title, company, ratings, reviews, salary, and summary
for div_row in td_resultsCol.find_all('div', class_='jobsearch-SerpJobCard unifiedRow row result'):
# find job title
job_title = None
job_company = None
job_rating = None
job_loc = None
job_salary = None
job_summary = None
for h2_title in div_row.find_all('h2', class_ = 'title'):
job_title = h2_title.a.text.strip().replace("'","_")
for div_dsc in div_row.find_all('div', class_ = 'sjcl'):
#find company name
for span_company in div_dsc.find_all('span', class_ = 'company'):
job_company = span_company.text.strip().replace("'","_")
# find location
for div_loc in div_dsc.find_all('div', class_ = 'location accessible-contrast-color-location'):
job_loc = div_loc.text.strip().replace("'","_")
# find salary
for div_salary in div_row.find_all('div',class_ ='salarySnippet'):
job_salary = div_salary.text.strip().replace("'","_")
#find summary
for div_summary in div_row.find_all('div', class_ = 'summary'):
job_summary = div_summary.text.strip().replace("'","_")
# insert into database
sql_insert = """
insert into gp18.indeed(job_title,job_company,job_loc,job_salary,job_summary)
values('{}','{}','{}','{}','{}')
""".format(job_title,job_company,job_loc,job_salary,job_summary)
cur.execute(sql_insert)
conn.commit()
###Output
_____no_output_____
###Markdown
Query the Table
###Code
df = pandas.read_sql_query('select count(*) as count,job_title from gp18.indeed group by job_title order by count desc ', conn)
df.plot.bar(x='job_title')
cur.close()
conn.close()
###Output
_____no_output_____
###Markdown
Extract Job Posts from Indeed Before extracting job posts from [Indeed](https://www.indeed.com/), make sure you have checked their [robots.txt](https://www.indeed.com/robots.txt) file. Create a table in database
###Code
import pandas
import configparser
import psycopg2
###Output
/home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages/psycopg2/__init__.py:144: UserWarning: The psycopg2 wheel package will be renamed from release 2.8; in order to keep installing from binary please use "pip install psycopg2-binary" instead. For details see: <http://initd.org/psycopg/docs/install.html#binary-install-from-pypi>.
""")
###Markdown
Read the database connection info from the config.ini
###Code
config = configparser.ConfigParser()
config.read('config.ini')
host = config['myaws']['host']
db = config['myaws']['db']
user = config['myaws']['user']
pwd = config['myaws']['pwd']
###Output
_____no_output_____
###Markdown
Establish a connection to the database, and create a cursor.
###Code
conn = psycopg2.connect(host = host,
user = user,
password = pwd,
dbname = db
)
cur = conn.cursor()
###Output
_____no_output_____
###Markdown
Design the table in SQL
###Code
# replace the schema and table name to your schema and table name
table_sql = """
CREATE TABLE IF NOT EXISTS gp12.indeed
(
id SERIAL,
job_title VARCHAR(200),
job_company VARCHAR(200),
job_loc VARCHAR(200),
job_salary VARCHAR(200),
job_summary TEXT,
PRIMARY KEY(id)
);
"""
###Output
_____no_output_____
###Markdown
create the table
###Code
cur.execute(table_sql)
conn.commit()
###Output
_____no_output_____
###Markdown
Request HTML [urllib.request](https://docs.python.org/3/library/urllib.request.html) makes simple HTTP requests to visit a web page and get the content via the Python standard library. Here we define the URL to search job posts about Intelligence analyst.
###Code
url = 'https://www.indeed.com/jobs?q=intelligence+analyst&start=2'
import urllib.request
response = urllib.request.urlopen(url)
html_data= response.read()
#print(html_data.decode('utf-8'))
###Output
_____no_output_____
###Markdown
Parse HTML We can use the inspector tool in browsers to analyze webpages and use [beautifulsoup](https://www.crummy.com/software/BeautifulSoup/bs4/doc/) to extract webpage data. pip install beautifulsoup4 if needed.
###Code
!pip install beautifulsoup4
from bs4 import BeautifulSoup
soup = BeautifulSoup(html_data,'html.parser')
#print (soup)
###Output
_____no_output_____
###Markdown
Use the `tag.find_all('tag_name', tag_attr='possible_value')` function to return a list of tags where the attribute equals the possible_value. Common attributes include `id` and `class_`. Common functions include `tag.text`, which returns the visible part of the tag, and `tag.get('attribute')`, which returns the value of the attribute of the tag. Since all the job posts are in the div tag class = 'jobsearch-Sprep...', we need to find that div tag from the body tag.
###Code
for table_resultsBody in soup.find_all('table', id = 'resultsBody'):
pass
#print(table_resultsBody)
for table_pageContent in table_resultsBody.find_all('table', id = 'pageContent'):
pass
#print(table_pageContent)
for td_resultsCol in table_pageContent.find_all('td', id = 'resultsCol'):
pass
#print(td_resultsCol)
###Output
_____no_output_____
###Markdown
Save Data to Database Now that we have found the div tag that contains the job posts, we need to identify the job title, company, ratings, reviews, salary, and summary. We can save those records to our table in the database.
###Code
# identify the job title, company, ratings, reviews, salary, and summary
for div_row in td_resultsCol.find_all('div', class_='jobsearch-SerpJobCard unifiedRow row result'):
# find job title
job_title = None
job_company = None
job_rating = None
job_loc = None
job_salary = None
job_summary = None
for h2_title in div_row.find_all('h2', class_ = 'title'):
job_title = h2_title.a.text.strip().replace("'","_")
for div_dsc in div_row.find_all('div', class_ = 'sjcl'):
#find company name
for span_company in div_dsc.find_all('span', class_ = 'company'):
job_company = span_company.text.strip().replace("'","_")
# find location
for div_loc in div_dsc.find_all('div', class_ = 'location accessible-contrast-color-location'):
job_loc = div_loc.text.strip().replace("'","_")
# find salary
for div_salary in div_row.find_all('div',class_ ='salarySnippet'):
job_salary = div_salary.text.strip().replace("'","_")
#find summary
for div_summary in div_row.find_all('div', class_ = 'summary'):
job_summary = div_summary.text.strip().replace("'","_")
# insert into database
sql_insert = """
insert into gp12.indeed(job_title,job_company,job_loc,job_salary,job_summary)
values('{}','{}','{}','{}','{}')
""".format(job_title,job_company,job_loc,job_salary,job_summary)
cur.execute(sql_insert)
conn.commit()
###Output
_____no_output_____
###Markdown
Query/View the Table
###Code
df = pandas.read_sql_query('select count(*) as count,job_title from gp12.indeed group by job_title order by count desc ', conn)
df.plot.bar(x='job_title')
cur.close()
conn.close()
###Output
_____no_output_____
###Markdown
import libraries
###Code
import pandas
import configparser
import psycopg2
config = configparser.ConfigParser()
config.read('config.ini')
host = config['myaws']['host']
db = config['myaws']['db']
user = config['myaws']['user']
pwd = config['myaws']['pwd']
conn = psycopg2.connect(host = host,
user = user,
password = pwd,
dbname = db
)
cur = conn.cursor()
###Output
_____no_output_____
###Markdown
create the house table; make sure to change the schema name to your gp number
###Code
# replace the schema and table name to your schema and table name
table_sql = """
CREATE TABLE IF NOT EXISTS gp12.house
(
price integer,
bed integer,
bath integer,
area integer,
address VARCHAR(200),
PRIMARY KEY(address)
);
"""
###Output
_____no_output_____
###Markdown
uncomment the lines in the cell below only if you want to delete the table; as it stands, the cell (re)creates it
###Code
#conn.rollback()
#table_sql="drop table if exists gp12.house"
cur.execute(table_sql)
conn.commit()
###Output
_____no_output_____
###Markdown
define the search region
###Code
url = 'https://www.trulia.com/VA/Fairfax_Station/22039/'
import urllib.request
response = urllib.request.urlopen(url)
html_data= response.read()
# print(html_data.decode('utf-8'))
from bs4 import BeautifulSoup
soup = BeautifulSoup(html_data,'html.parser')
#print (soup)
###Output
_____no_output_____
###Markdown
insert the records into database
###Code
for li_class in soup.find_all('li', class_ = 'Grid__CellBox-sc-144isrp-0 SearchResultsList__WideCell-b7y9ki-2 jiZmPM'):
try:
for price_div in li_class.find_all('div',{'data-testid':'property-price'}):
price =int(price_div.text.replace('$','').replace(",",""))
for bed_div in li_class.find_all('div', {'data-testid':'property-beds'}):
bed= int(bed_div.text.replace('bd','').replace(",",""))
for bath_div in li_class.find_all('div',{'data-testid':'property-baths'}):
bath =int(bath_div.text.replace('ba','').replace(",",""))
for area_div in li_class.find_all('div',{'data-testid':'property-floorSpace'}):
area=int(area_div.text.split('sqft')[0].replace(",",""))
for address_div in li_class.find_all('div',{'data-testid':'property-address'}):
address =address_div.text
try:
sql_insert = """
insert into gp12.house(price,bed,bath,area,address)
values('{}','{}','{}','{}','{}')
""".format(price,bed,bath,area,address)
cur.execute(sql_insert)
conn.commit()
except:
conn.rollback()
except:
pass
###Output
_____no_output_____
###Markdown
Query the Table
###Code
df = pandas.read_sql_query('select * from gp12.house ', conn)
df[:]
###Output
_____no_output_____
###Markdown
basic stat
###Code
df.describe()
###Output
_____no_output_____
###Markdown
price distribution
###Code
df['price'].hist()
###Output
_____no_output_____
###Markdown
bed vs bath
###Code
df.plot.scatter(x='bed',y='bath')
###Output
_____no_output_____
###Markdown
import libraries
###Code
import pandas
import configparser
import psycopg2
config = configparser.ConfigParser()
config.read('config.ini')
host = config['myaws']['host']
db = config['myaws']['db']
user = config['myaws']['user']
pwd = config['myaws']['pwd']
conn = psycopg2.connect(host = host,
user = user,
password = pwd,
dbname = db
)
cur = conn.cursor()
###Output
_____no_output_____
###Markdown
create the house table; make sure to change the schema name to your gp number
###Code
# replace the schema and table name to your schema and table name
table_sql = """
CREATE TABLE IF NOT EXISTS demo.house
(
price integer,
bed integer,
bath integer,
area integer,
address VARCHAR(200),
PRIMARY KEY(address)
);
"""
###Output
_____no_output_____
###Markdown
uncomment the lines in the cell below only if you want to delete the table; as it stands, the cell (re)creates it
###Code
#conn.rollback()
#table_sql="drop table if exists demo.house"
cur.execute(table_sql)
conn.commit()
###Output
_____no_output_____
###Markdown
define the search region
###Code
url = 'https://www.trulia.com/VA/Harrisonburg/22801/'
import urllib.request
response = urllib.request.urlopen(url)
html_data= response.read()
# print(html_data.decode('utf-8'))
from bs4 import BeautifulSoup
soup = BeautifulSoup(html_data,'html.parser')
# print (soup)
###Output
_____no_output_____
###Markdown
insert the records into database
###Code
for li_class in soup.find_all('li', class_ = 'Grid__CellBox-sc-144isrp-0 SearchResultsList__WideCell-b7y9ki-2 jiZmPM'):
try:
for price_div in li_class.find_all('div',{'data-testid':'property-price'}):
price =int(price_div.text.replace('$','').replace(",",""))
for bed_div in li_class.find_all('div', {'data-testid':'property-beds'}):
bed= int(bed_div.text.replace('bd','').replace(",",""))
for bath_div in li_class.find_all('div',{'data-testid':'property-baths'}):
bath =int(bath_div.text.replace('ba','').replace(",",""))
for area_div in li_class.find_all('div',{'data-testid':'property-floorSpace'}):
area=int(area_div.text.split('sqft')[0].replace(",",""))
for address_div in li_class.find_all('div',{'data-testid':'property-address'}):
address =address_div.text
try:
sql_insert = """
insert into demo.house(price,bed,bath,area,address)
values('{}','{}','{}','{}','{}')
""".format(price,bed,bath,area,address)
cur.execute(sql_insert)
conn.commit()
except:
conn.rollback()
except:
pass
###Output
_____no_output_____
###Markdown
Query the Table
###Code
df = pandas.read_sql_query('select * from demo.house ', conn)
df[:10]
###Output
_____no_output_____
###Markdown
basic stat
###Code
df.describe()
###Output
_____no_output_____
###Markdown
price distribution
###Code
df['price'].hist()
###Output
_____no_output_____
###Markdown
bed vs bath
###Code
df.plot.scatter(x='bed',y='bath')
###Output
_____no_output_____
###Markdown
import libraries
###Code
import pandas
import configparser
import psycopg2
config = configparser.ConfigParser()
config.read('config.ini')
host = config['myaws']['host']
db = config['myaws']['db']
user = config['myaws']['user']
pwd = config['myaws']['pwd']
conn = psycopg2.connect(host = host,
user = user,
password = pwd,
dbname = db
)
cur = conn.cursor()
###Output
_____no_output_____
###Markdown
create the house table; make sure to change the schema name to your gp number
###Code
# replace the schema and table name to your schema and table name
table_sql = """
CREATE TABLE IF NOT EXISTS gp21.house
(
price integer,
bed integer,
bath integer,
area integer,
address VARCHAR(200),
PRIMARY KEY(address)
);
"""
###Output
_____no_output_____
###Markdown
use the cell below only if you want to delete the table
###Code
#conn.rollback()
#table_sql="drop table if exists demo.house"
cur.execute(table_sql)
conn.commit()
###Output
_____no_output_____
###Markdown
define the search region
###Code
url = 'https://www.trulia.com/VA/Ashburn/20147/'
import urllib.request
response = urllib.request.urlopen(url)
html_data= response.read()
# print(html_data.decode('utf-8'))
from bs4 import BeautifulSoup
soup = BeautifulSoup(html_data,'html.parser')
# print (soup)
###Output
_____no_output_____
###Markdown
insert the records into database
###Code
for li_class in soup.find_all('li', class_ = 'Grid__CellBox-sc-144isrp-0 SearchResultsList__WideCell-b7y9ki-2 jiZmPM'):
try:
for price_div in li_class.find_all('div',{'data-testid':'property-price'}):
price =int(price_div.text.replace('$','').replace(",",""))
for bed_div in li_class.find_all('div', {'data-testid':'property-beds'}):
bed= int(bed_div.text.replace('bd','').replace(",",""))
for bath_div in li_class.find_all('div',{'data-testid':'property-baths'}):
bath =int(bath_div.text.replace('ba','').replace(",",""))
for area_div in li_class.find_all('div',{'data-testid':'property-floorSpace'}):
area=int(area_div.text.split('sqft')[0].replace(",",""))
for address_div in li_class.find_all('div',{'data-testid':'property-address'}):
address =address_div.text
try:
sql_insert = """
insert into gp21.house(price,bed,bath,area,address)
values('{}','{}','{}','{}','{}')
""".format(price,bed,bath,area,address)
cur.execute(sql_insert)
conn.commit()
except:
conn.rollback()
except:
pass
###Output
_____no_output_____
###Markdown
Query the Table
###Code
df = pandas.read_sql_query('select * from gp21.house ', conn)
df[:]
###Output
_____no_output_____
###Markdown
basic stat
###Code
df.describe()
###Output
_____no_output_____
###Markdown
price distribution
###Code
df['price'].hist()
###Output
_____no_output_____
###Markdown
bed vs bath
###Code
df.plot.scatter(x='bed',y='bath')
###Output
_____no_output_____
###Markdown
Extract Job Posts from Indeed Before extracting job posts from [Indeed](https://www.indeed.com/), make sure you have checked their [robots.txt](https://www.indeed.com/robots.txt) file. Create a table in the database
###Code
import pandas
import configparser
import psycopg2
###Output
/home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages/psycopg2/__init__.py:144: UserWarning: The psycopg2 wheel package will be renamed from release 2.8; in order to keep installing from binary please use "pip install psycopg2-binary" instead. For details see: <http://initd.org/psycopg/docs/install.html#binary-install-from-pypi>.
""")
###Markdown
Read the database connection info from the config.ini
###Code
config = configparser.ConfigParser()
config.read('config.ini')
host = config['myaws']['host']
db = config['myaws']['db']
user = config['myaws']['user']
pwd = config['myaws']['pwd']
###Output
_____no_output_____
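###Markdown
For reference, the reads above expect a config.ini with a [myaws] section holding host, db, user, and pwd keys. A small sketch that generates such a template with configparser (all values are placeholders; it writes to config_template.ini so a real config.ini is not overwritten):
###Code
# sketch only: write a template config file with placeholder values
sample = configparser.ConfigParser()
sample['myaws'] = {
    'host': 'your-endpoint.example.com',  # placeholder
    'db': 'your_database',                # placeholder
    'user': 'your_username',              # placeholder
    'pwd': 'your_password',               # placeholder
}
with open('config_template.ini', 'w') as f:
    sample.write(f)
###Output
_____no_output_____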
###Markdown
Establish a connection to the database, and create a cursor.
###Code
conn = psycopg2.connect(host = host,
user = user,
password = pwd,
dbname = db
)
cur = conn.cursor()
###Output
_____no_output_____
###Markdown
Design the table in SQL
###Code
# replace the schema and table name to your schema and table name
table_sql = """
CREATE TABLE IF NOT EXISTS gp4.indeed
(
id SERIAL,
job_title VARCHAR(200),
job_company VARCHAR(200),
job_loc VARCHAR(200),
job_salary VARCHAR(200),
job_summary TEXT,
PRIMARY KEY(id)
);
"""
###Output
_____no_output_____
###Markdown
create the table
###Code
cur.execute(table_sql)
conn.commit()
###Output
_____no_output_____
###Markdown
Request HTML. [urllib.request](https://docs.python.org/3/library/urllib.request.html) makes simple HTTP requests to visit a web page and get the content via the Python standard library. Here we define the URL to search job posts about Intelligence analyst.
###Code
url = 'https://www.indeed.com/jobs?q=intelligence+analyst&start=2'
import urllib.request
response = urllib.request.urlopen(url)
html_data= response.read()
#print(html_data.decode('utf-8'))
###Output
_____no_output_____
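###Markdown
Some sites reject urllib's default User-Agent. If the request above fails with an HTTP error, a minimal sketch of attaching a browser-like header instead (the header value here is just an example):
###Code
# sketch only: a Request object allows custom headers to be attached
req = urllib.request.Request(url, headers={'User-Agent': 'Mozilla/5.0'})
response = urllib.request.urlopen(req)
html_data = response.read()
###Output
_____no_output_____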
###Markdown
Parse HTML. We can use the inspector tool in browsers to analyze webpages and use [beautifulsoup](https://www.crummy.com/software/BeautifulSoup/bs4/doc/) to extract webpage data. pip install beautifulsoup4 if needed.
###Code
!pip install beautifulsoup4
from bs4 import BeautifulSoup
soup = BeautifulSoup(html_data,'html.parser')
#print (soup)
###Output
_____no_output_____
###Markdown
Use the tag.find_all('tag_name', tag_attr='possible_value') function to return a list of tags where the attribute equals the possible_value. Common attributes include id and class_. Common functions include tag.text (returns the visible part of the tag) and tag.get('attribute') (returns the value of the attribute of the tag). Since all the job posts are in the div tag class = 'jobsearch-Sprep...', we need to find that div tag from the body tag.
###Code
for table_resultsBody in soup.find_all('table', id = 'resultsBody'):
pass
#print(table_resultsBody)
for table_pageContent in table_resultsBody.find_all('table', id = 'pageContent'):
pass
#print(table_pageContent)
for td_resultsCol in table_pageContent.find_all('td', id = 'resultsCol'):
pass
#print(td_resultsCol)
###Output
_____no_output_____
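###Markdown
A tiny self-contained illustration of find_all, tag.text, and tag.get('attribute') on a hand-written snippet rather than the live page:
###Code
from bs4 import BeautifulSoup

# sketch only: demonstrate the calls described above on toy HTML
demo_html = '<div id="a"><a class="job" href="/x">Analyst</a><a class="job" href="/y">Engineer</a></div>'
demo = BeautifulSoup(demo_html, 'html.parser')
for a_tag in demo.find_all('a', class_='job'):
    print(a_tag.text)         # visible part of the tag
    print(a_tag.get('href'))  # value of the href attribute
###Output
_____no_output_____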
###Markdown
Save Data to Database. Now that we have found the div tag that contains the job posts, we need to identify the job title, company, ratings, reviews, salary, and summary, and save those records to our table in the database.
###Code
# identify the job title, company, ratings, reviews, salary, and summary
for div_row in td_resultsCol.find_all('div', class_='jobsearch-SerpJobCard unifiedRow row result'):
# find job title
job_title = None
job_company = None
job_rating = None
job_loc = None
job_salary = None
job_summary = None
for h2_title in div_row.find_all('h2', class_ = 'title'):
job_title = h2_title.a.text.strip().replace("'","_")
for div_dsc in div_row.find_all('div', class_ = 'sjcl'):
#find company name
for span_company in div_dsc.find_all('span', class_ = 'company'):
job_company = span_company.text.strip().replace("'","_")
# find location
for div_loc in div_dsc.find_all('div', class_ = 'location accessible-contrast-color-location'):
job_loc = div_loc.text.strip().replace("'","_")
# find salary
for div_salary in div_row.find_all('div',class_ ='salarySnippet'):
job_salary = div_salary.text.strip().replace("'","_")
#find summary
for div_summary in div_row.find_all('div', class_ = 'summary'):
job_summary = div_summary.text.strip().replace("'","_")
# insert into database
sql_insert = """
insert into gp4.indeed(job_title,job_company,job_loc,job_salary,job_summary)
values('{}','{}','{}','{}','{}')
""".format(job_title,job_company,job_loc,job_salary,job_summary)
cur.execute(sql_insert)
conn.commit()
cur.execute('ROLLBACK')
###Output
_____no_output_____
###Markdown
View the Table
###Code
df = pandas.read_sql_query('select count(*) as count,job_title from gp4.indeed group by job_title order by count desc ', conn)
df.plot.bar(x='job_title')
cur.close()
conn.close()
###Output
_____no_output_____
###Markdown
import libraries
###Code
import pandas
import configparser
import psycopg2
config = configparser.ConfigParser()
config.read('config.ini')
host = config['myaws']['host']
db = config['myaws']['db']
user = config['myaws']['user']
pwd = config['myaws']['pwd']
conn = psycopg2.connect(host = host,
user = user,
password = pwd,
dbname = db
)
cur = conn.cursor()
###Output
_____no_output_____
###Markdown
create the house table; make sure to change the schema name to your gp number
###Code
# replace the schema and table name to your schema and table name
table_sql = """
CREATE TABLE IF NOT EXISTS gp15.house
(
price integer,
bed integer,
bath integer,
area integer,
address VARCHAR(200),
PRIMARY KEY(address)
);
"""
###Output
_____no_output_____
###Markdown
use the cell below only if you want to delete the table
###Code
#conn.rollback()
#table_sql="drop table if exists demo.house"
cur.execute(table_sql)
conn.commit()
###Output
_____no_output_____
###Markdown
define the search region
###Code
url = 'https://www.trulia.com/CA/Beverly_Hills/'
import urllib.request
response = urllib.request.urlopen(url)
html_data= response.read()
# print(html_data.decode('utf-8'))
from bs4 import BeautifulSoup
soup = BeautifulSoup(html_data,'html.parser')
# print (soup)
###Output
_____no_output_____
###Markdown
insert the records into database
###Code
for li_class in soup.find_all('li', class_ = 'Grid__CellBox-sc-144isrp-0 SearchResultsList__WideCell-b7y9ki-2 jiZmPM'):
try:
for price_div in li_class.find_all('div',{'data-testid':'property-price'}):
price =int(price_div.text.replace('$','').replace(",",""))
for bed_div in li_class.find_all('div', {'data-testid':'property-beds'}):
bed= int(bed_div.text.replace('bd','').replace(",",""))
for bath_div in li_class.find_all('div',{'data-testid':'property-baths'}):
bath =int(bath_div.text.replace('ba','').replace(",",""))
for area_div in li_class.find_all('div',{'data-testid':'property-floorSpace'}):
area=int(area_div.text.split('sqft')[0].replace(",",""))
for address_div in li_class.find_all('div',{'data-testid':'property-address'}):
address =address_div.text
try:
sql_insert = """
insert into gp15.house(price,bed,bath,area,address)
values('{}','{}','{}','{}','{}')
""".format(price,bed,bath,area,address)
cur.execute(sql_insert)
conn.commit()
except:
conn.rollback()
except:
pass
###Output
_____no_output_____
###Markdown
Query the Table
###Code
df = pandas.read_sql_query('select * from gp15.house ', conn)
df[:]
###Output
_____no_output_____
###Markdown
basic stat
###Code
df.describe()
###Output
_____no_output_____
###Markdown
price distribution
###Code
df['price'].hist()
###Output
_____no_output_____
###Markdown
bed vs bath
###Code
df.plot.scatter(x='bed',y='bath')
###Output
_____no_output_____
###Markdown
Given a string s entered from the keyboard, write a program that converts the characters of s to uppercase and prints the result to the screen.
###Code
s = input()
print(s.upper())
###Output
_____no_output_____
###Markdown
Given a string s entered from the keyboard, write a program that builds a string from the first 2 characters and the last 2 characters of s and prints it to the screen. If s is shorter than 2 characters, print an empty string.
###Code
s = input()
if len(s) < 2:
print("")
else:
print(s[0:2] + s[-2:])
###Output
_____no_output_____
###Markdown
Given two strings s1 and s2 entered from the keyboard, write a program that swaps the first 2 characters of s1 and s2 with each other, then prints the new string s1 + " " + s2 to the screen.
###Code
s1 = input()
s2 = input()
print(s2[0:2] + s1[2:] + " " + s1[0:2] + s2[2:])
###Output
_____no_output_____
###Markdown
Given a string s entered from the keyboard, write a program that reverses the order in which the words appear in s, then prints the processed string to the screen.
###Code
s = str(input())
print(" ".join(s.split()[::-1]))
###Output
_____no_output_____
###Markdown
Lab 6 Import Libraries
###Code
import pandas
import configparser
import psycopg2
config = configparser.ConfigParser()
config.read('config.ini')
host = config['myaws']['host']
db = config['myaws']['db']
user = config['myaws']['user']
pwd = config['myaws']['pwd']
conn = psycopg2.connect(host = host,
user = user,
password = pwd,
dbname = db
)
cur = conn.cursor()
###Output
_____no_output_____
###Markdown
House Table
###Code
table_sql = """
CREATE TABLE IF NOT EXISTS gp5.house
(
price integer,
bed integer,
bath integer,
area integer,
address VARCHAR(200),
PRIMARY KEY(address)
);
"""
cur.execute(table_sql)
conn.commit()
###Output
_____no_output_____
###Markdown
Define Search Region
###Code
url = 'https://www.trulia.com/VA/Roanoke/24014/'
import urllib.request
response = urllib.request.urlopen(url)
html_data= response.read()
# print(html_data.decode('utf-8'))
from bs4 import BeautifulSoup
soup = BeautifulSoup(html_data,'html.parser')
# print (soup)
###Output
_____no_output_____
###Markdown
Insert Records into DB
###Code
for li_class in soup.find_all('li', class_ = 'Grid__CellBox-sc-144isrp-0 SearchResultsList__WideCell-b7y9ki-2 jiZmPM'):
try:
for price_div in li_class.find_all('div',{'data-testid':'property-price'}):
price =int(price_div.text.replace('$','').replace(",",""))
for bed_div in li_class.find_all('div', {'data-testid':'property-beds'}):
bed= int(bed_div.text.replace('bd','').replace(",",""))
for bath_div in li_class.find_all('div',{'data-testid':'property-baths'}):
bath =int(bath_div.text.replace('ba','').replace(",",""))
for area_div in li_class.find_all('div',{'data-testid':'property-floorSpace'}):
area=int(area_div.text.split('sqft')[0].replace(",",""))
for address_div in li_class.find_all('div',{'data-testid':'property-address'}):
address =address_div.text
try:
sql_insert = """
insert into gp5.house(price,bed,bath,area,address)
values('{}','{}','{}','{}','{}')
""".format(price,bed,bath,area,address)
cur.execute(sql_insert)
conn.commit()
except:
conn.rollback()
except:
pass
###Output
_____no_output_____
###Markdown
Query the Table
###Code
df = pandas.read_sql_query('select * from gp5.house ', conn)
df[:10]
###Output
_____no_output_____
###Markdown
Basic Stat
###Code
df.describe()
###Output
_____no_output_____
###Markdown
Price Distribution
###Code
df['price'].hist()
###Output
_____no_output_____
###Markdown
Bed vs Bath
###Code
df.plot.scatter(x='bed',y='bath')
###Output
_____no_output_____
###Markdown
Extract Job Posts from Indeed Before extracting job posts from [Indeed](https://www.indeed.com/), make sure you have checked their [robots.txt](https://www.indeed.com/robots.txt) file. Create a table in the database
###Code
import pandas
import configparser
import psycopg2
###Output
_____no_output_____
###Markdown
Read the database connection info from the config.ini
###Code
config = configparser.ConfigParser()
config.read('config.ini')
host = config['myaws']['host']
db = config['myaws']['db']
user = config['myaws']['user']
pwd = config['myaws']['pwd']
###Output
_____no_output_____
###Markdown
Establish a connection to the database, and create a cursor.
###Code
conn = psycopg2.connect(host = host,
user = user,
password = pwd,
dbname = db
)
cur = conn.cursor()
###Output
_____no_output_____
###Markdown
Design the table in SQL
###Code
table_sql = """
CREATE TABLE IF NOT EXISTS gp30.indeed
(
id SERIAL,
job_title VARCHAR(200),
job_company VARCHAR(200),
job_loc VARCHAR(200),
job_salary VARCHAR(200),
job_summary TEXT,
PRIMARY KEY(id)
);
"""
###Output
_____no_output_____
###Markdown
create the table
###Code
cur.execute(table_sql)
conn.commit()
###Output
_____no_output_____
###Markdown
Request HTML. [urllib.request](https://docs.python.org/3/library/urllib.request.html) makes simple HTTP requests to visit a web page and get the content via the Python standard library. Here we define the URL to search job posts about Intelligence analyst.
###Code
url = 'https://www.indeed.com/jobs?q=intelligence+analyst&start=2'
import urllib.request
response = urllib.request.urlopen(url)
html_data= response.read()
#print(html_data.decode('utf-8'))
###Output
_____no_output_____
###Markdown
Parse HTML. We can use the inspector tool in browsers to analyze webpages and use [beautifulsoup](https://www.crummy.com/software/BeautifulSoup/bs4/doc/) to extract webpage data. pip install beautifulsoup4 if needed.
###Code
!pip install beautifulsoup4
from bs4 import BeautifulSoup
soup = BeautifulSoup(html_data,'html.parser')
#print (soup)
###Output
_____no_output_____
###Markdown
Use the tag.find_all('tag_name', tag_attr='possible_value') function to return a list of tags where the attribute equals the possible_value. Common attributes include id and class_. Common functions include tag.text (returns the visible part of the tag) and tag.get('attribute') (returns the value of the attribute of the tag). Since all the job posts are in the div tag class = 'jobsearch-Sprep...', we need to find that div tag from the body tag.
###Code
for table_resultsBody in soup.find_all('table', id = 'resultsBody'):
pass
#print(table_resultsBody)
for table_pageContent in table_resultsBody.find_all('table', id = 'pageContent'):
pass
#print(table_pageContent)
for td_resultsCol in table_pageContent.find_all('td', id = 'resultsCol'):
pass
#print(td_resultsCol)
###Output
_____no_output_____
###Markdown
Save Data to Database. Now that we have found the div tag that contains the job posts, we need to identify the job title, company, ratings, reviews, salary, and summary, and save those records to our table in the database.
###Code
# identify the job title, company, ratings, reviews, salary, and summary
for div_row in td_resultsCol.find_all('div', class_='jobsearch-SerpJobCard unifiedRow row result'):
# find job title
job_title = None
job_company = None
job_rating = None
job_loc = None
job_salary = None
job_summary = None
for h2_title in div_row.find_all('h2', class_ = 'title'):
job_title = h2_title.a.text.strip().replace("'","_")
for div_dsc in div_row.find_all('div', class_ = 'sjcl'):
#find company name
for span_company in div_dsc.find_all('span', class_ = 'company'):
job_company = span_company.text.strip().replace("'","_")
# find location
for div_loc in div_dsc.find_all('div', class_ = 'location accessible-contrast-color-location'):
job_loc = div_loc.text.strip().replace("'","_")
# find salary
for div_salary in div_row.find_all('div',class_ ='salarySnippet'):
job_salary = div_salary.text.strip().replace("'","_")
#find summary
for div_summary in div_row.find_all('div', class_ = 'summary'):
job_summary = div_summary.text.strip().replace("'","_")
# insert into database
sql_insert = """
insert into gp30.indeed(job_title,job_company,job_loc,job_salary,job_summary)
values('{}','{}','{}','{}','{}')
""".format(job_title,job_company,job_loc,job_salary,job_summary)
cur.execute(sql_insert)
conn.commit()
cur.execute("ROLLBACK")
###Output
_____no_output_____
###Markdown
Query the Table
###Code
df = pandas.read_sql_query('select count(*) as count,job_title from gp30.indeed group by job_title order by count desc ', conn)
df.plot.bar(x='job_title')
cur.close()
conn.close()
###Output
_____no_output_____
###Markdown
import libraries
###Code
import pandas
import configparser
import psycopg2
config = configparser.ConfigParser()
config.read('config.ini')
host = config['myaws']['host']
db = config['myaws']['db']
user = config['myaws']['user']
pwd = config['myaws']['pwd']
conn = psycopg2.connect(host = host,
user = user,
password = pwd,
dbname = db
)
cur = conn.cursor()
###Output
_____no_output_____
###Markdown
create the house table; make sure to change the schema name to your gp number
###Code
# replace the schema and table name to your schema and table name
table_sql = """
CREATE TABLE IF NOT EXISTS gp11.house
(
price integer,
bed integer,
bath integer,
area integer,
address VARCHAR(200),
PRIMARY KEY(address)
);
"""
###Output
_____no_output_____
###Markdown
use the cell below only if you want to delete the table
###Code
#conn.rollback()
#table_sql="drop table if exists demo.house"
cur.execute(table_sql)
conn.commit()
###Output
_____no_output_____
###Markdown
define the search region
###Code
url = 'https://www.trulia.com/for_sale/Bowie,MD/12_zm/'
import urllib.request
response = urllib.request.urlopen(url)
html_data= response.read()
# print(html_data.decode('utf-8'))
from bs4 import BeautifulSoup
soup = BeautifulSoup(html_data,'html.parser')
# print (soup)
###Output
_____no_output_____
###Markdown
insert the records into database
###Code
for li_class in soup.find_all('li', class_ = 'Grid__CellBox-sc-144isrp-0 SearchResultsList__WideCell-b7y9ki-2 jiZmPM'):
try:
for price_div in li_class.find_all('div',{'data-testid':'property-price'}):
price =int(price_div.text.replace('$','').replace(",",""))
for bed_div in li_class.find_all('div', {'data-testid':'property-beds'}):
bed= int(bed_div.text.replace('bd','').replace(",",""))
for bath_div in li_class.find_all('div',{'data-testid':'property-baths'}):
bath =int(bath_div.text.replace('ba','').replace(",",""))
for area_div in li_class.find_all('div',{'data-testid':'property-floorSpace'}):
area=int(area_div.text.split('sqft')[0].replace(",",""))
for address_div in li_class.find_all('div',{'data-testid':'property-address'}):
address =address_div.text
try:
sql_insert = """
insert into gp11.house(price,bed,bath,area,address)
values('{}','{}','{}','{}','{}')
""".format(price,bed,bath,area,address)
cur.execute(sql_insert)
conn.commit()
except:
conn.rollback()
except:
pass
###Output
_____no_output_____
###Markdown
Query the Table
###Code
df = pandas.read_sql_query('select * from gp11.house ', conn)
df[:]
###Output
_____no_output_____
###Markdown
basic stat
###Code
df.describe()
###Output
_____no_output_____
###Markdown
price distribution
###Code
df['price'].hist()
###Output
_____no_output_____
###Markdown
bed vs bath
###Code
df.plot.scatter(x='bed',y='bath')
###Output
_____no_output_____
###Markdown
Extract Job Posts from Indeed Before extracting job posts from [Indeed](https://www.indeed.com/), make sure you have checked their [robots.txt](https://www.indeed.com/robots.txt) file. Create a table in the database
###Code
import pandas
import configparser
import psycopg2
###Output
/home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages/psycopg2/__init__.py:144: UserWarning: The psycopg2 wheel package will be renamed from release 2.8; in order to keep installing from binary please use "pip install psycopg2-binary" instead. For details see: <http://initd.org/psycopg/docs/install.html#binary-install-from-pypi>.
""")
###Markdown
Read the database connection info from the config.ini
###Code
config = configparser.ConfigParser()
config.read('config.ini')
host = config['myaws']['host']
db = config['myaws']['db']
user = config['myaws']['user']
pwd = config['myaws']['pwd']
###Output
_____no_output_____
###Markdown
Establish a connection to the database, and create a cursor.
###Code
conn = psycopg2.connect(host = host,
user = user,
password = pwd,
dbname = db
)
cur = conn.cursor()
###Output
_____no_output_____
###Markdown
Design the table in SQL
###Code
# replace the schema and table name to your schema and table name
table_sql = """
CREATE TABLE IF NOT EXISTS gp11.indeed
(
id SERIAL,
job_title VARCHAR(200),
job_company VARCHAR(200),
job_loc VARCHAR(200),
job_salary VARCHAR(200),
job_summary TEXT,
PRIMARY KEY(id)
);
"""
###Output
_____no_output_____
###Markdown
create the table
###Code
cur.execute(table_sql)
conn.commit()
###Output
_____no_output_____
###Markdown
Request HTML. [urllib.request](https://docs.python.org/3/library/urllib.request.html) makes simple HTTP requests to visit a web page and get the content via the Python standard library. Here we define the URL to search job posts about Intelligence analyst.
###Code
url = 'https://www.indeed.com/jobs?q=intelligence+analyst&start=2'
import urllib.request
response = urllib.request.urlopen(url)
html_data= response.read()
#print(html_data.decode('utf-8'))
###Output
_____no_output_____
###Markdown
Parse HTML. We can use the inspector tool in browsers to analyze webpages and use [beautifulsoup](https://www.crummy.com/software/BeautifulSoup/bs4/doc/) to extract webpage data. pip install beautifulsoup4 if needed.
###Code
!pip install beautifulsoup4
from bs4 import BeautifulSoup
soup = BeautifulSoup(html_data,'html.parser')
#print (soup)
###Output
_____no_output_____
###Markdown
Use the tag.find_all('tag_name', tag_attr='possible_value') function to return a list of tags where the attribute equals the possible_value. Common attributes include id and class_. Common functions include tag.text (returns the visible part of the tag) and tag.get('attribute') (returns the value of the attribute of the tag). Since all the job posts are in the div tag class = 'jobsearch-Sprep...', we need to find that div tag from the body tag.
###Code
for table_resultsBody in soup.find_all('table', id = 'resultsBody'):
pass
#print(table_resultsBody)
for table_pageContent in table_resultsBody.find_all('table', id = 'pageContent'):
pass
#print(table_pageContent)
for td_resultsCol in table_pageContent.find_all('td', id = 'resultsCol'):
pass
#print(td_resultsCol)
###Output
_____no_output_____
###Markdown
Save Data to Database. Now that we have found the div tag that contains the job posts, we need to identify the job title, company, ratings, reviews, salary, and summary, and save those records to our table in the database.
###Code
# identify the job title, company, ratings, reviews, salary, and summary
for div_row in td_resultsCol.find_all('div', class_='jobsearch-SerpJobCard unifiedRow row result'):
# find job title
job_title = None
job_company = None
job_rating = None
job_loc = None
job_salary = None
job_summary = None
for h2_title in div_row.find_all('h2', class_ = 'title'):
job_title = h2_title.a.text.strip().replace("'","_")
for div_dsc in div_row.find_all('div', class_ = 'sjcl'):
#find company name
for span_company in div_dsc.find_all('span', class_ = 'company'):
job_company = span_company.text.strip().replace("'","_")
# find location
for div_loc in div_dsc.find_all('div', class_ = 'location accessible-contrast-color-location'):
job_loc = div_loc.text.strip().replace("'","_")
# find salary
for div_salary in div_row.find_all('div',class_ ='salarySnippet'):
job_salary = div_salary.text.strip().replace("'","_")
#find summary
for div_summary in div_row.find_all('div', class_ = 'summary'):
job_summary = div_summary.text.strip().replace("'","_")
# insert into database
sql_insert = """
insert into gp11.indeed(job_title,job_company,job_loc,job_salary,job_summary)
values('{}','{}','{}','{}','{}')
""".format(job_title,job_company,job_loc,job_salary,job_summary)
cur.execute(sql_insert)
conn.commit()
###Output
_____no_output_____
###Markdown
Query the Table
###Code
df = pandas.read_sql_query('select count(*) as count,job_title from gp11.indeed group by job_title order by count desc ', conn)
df.plot.bar(x='job_title')
cur.close()
conn.close()
###Output
_____no_output_____
###Markdown
import libraries
###Code
import pandas
import configparser
import psycopg2
config = configparser.ConfigParser()
config.read('config.ini')
host = config['myaws']['host']
db = config['myaws']['db']
user = config['myaws']['user']
pwd = config['myaws']['pwd']
conn = psycopg2.connect(host = host,
user = user,
password = pwd,
dbname = db
)
cur = conn.cursor()
###Output
_____no_output_____
###Markdown
create the house table; make sure to change the schema name to your gp number
###Code
# replace the schema and table name to your schema and table name
table_sql = """
CREATE TABLE IF NOT EXISTS gp14.house
(
price integer,
bed integer,
bath integer,
area integer,
address VARCHAR(200),
PRIMARY KEY(address)
);
"""
###Output
_____no_output_____
###Markdown
use the cell below only if you want to delete the table
###Code
#conn.rollback()
#table_sql="drop table if exists demo.house"
cur.execute(table_sql)
conn.commit()
###Output
_____no_output_____
###Markdown
define the search region
###Code
url = 'https://www.trulia.com/VA/Alexandria/'
import urllib.request
response = urllib.request.urlopen(url)
html_data= response.read()
# print(html_data.decode('utf-8'))
from bs4 import BeautifulSoup
soup = BeautifulSoup(html_data,'html.parser')
# print (soup)
###Output
_____no_output_____
###Markdown
insert the records into database
###Code
for li_class in soup.find_all('li', class_ = 'Grid__CellBox-sc-144isrp-0 SearchResultsList__WideCell-b7y9ki-2 jiZmPM'):
try:
for price_div in li_class.find_all('div',{'data-testid':'property-price'}):
price =int(price_div.text.replace('$','').replace(",",""))
for bed_div in li_class.find_all('div', {'data-testid':'property-beds'}):
bed= int(bed_div.text.replace('bd','').replace(",",""))
for bath_div in li_class.find_all('div',{'data-testid':'property-baths'}):
bath =int(bath_div.text.replace('ba','').replace(",",""))
for area_div in li_class.find_all('div',{'data-testid':'property-floorSpace'}):
area=int(area_div.text.split('sqft')[0].replace(",",""))
for address_div in li_class.find_all('div',{'data-testid':'property-address'}):
address =address_div.text
try:
sql_insert = """
insert into gp14.house(price,bed,bath,area,address)
values('{}','{}','{}','{}','{}')
""".format(price,bed,bath,area,address)
cur.execute(sql_insert)
conn.commit()
except:
conn.rollback()
except:
pass
###Output
_____no_output_____
###Markdown
Query the Table
###Code
df = pandas.read_sql_query('select * from gp14.house ', conn)
df[:]
###Output
_____no_output_____
###Markdown
basic stat
###Code
df.describe()
###Output
_____no_output_____
###Markdown
price distribution
###Code
df['price'].hist()
###Output
_____no_output_____
###Markdown
bed vs bath
###Code
df.plot.scatter(x='bed',y='bath')
###Output
_____no_output_____
###Markdown
import libraries
###Code
import pandas
import configparser
import psycopg2
config = configparser.ConfigParser()
config.read('config.ini')
host = config['myaws']['host']
db = config['myaws']['db']
user = config['myaws']['user']
pwd = config['myaws']['pwd']
conn = psycopg2.connect(host = host,
user = user,
password = pwd,
dbname = db
)
cur = conn.cursor()
###Output
_____no_output_____
###Markdown
create the house table; make sure to change the schema name to your gp number
###Code
# replace the schema and table name to your schema and table name
table_sql = """
CREATE TABLE IF NOT EXISTS gp9.house
(
price integer,
bed integer,
bath integer,
area integer,
address VARCHAR(200),
PRIMARY KEY(address)
);
"""
###Output
_____no_output_____
###Markdown
use the cell below only if you want to delete the table
###Code
#conn.rollback()
#table_sql="drop table if exists demo.house"
cur.execute(table_sql)
conn.commit()
###Output
_____no_output_____
###Markdown
define the search region
###Code
url = 'https://www.trulia.com/VA/Ashburn/'
###Output
_____no_output_____
###Markdown
load the webpage HTML into Python
###Code
import urllib.request
response = urllib.request.urlopen(url)
html_data= response.read()
#print(html_data.decode('utf-8'))
from bs4 import BeautifulSoup
soup = BeautifulSoup(html_data,'html.parser')
#print (soup)
###Output
_____no_output_____
###Markdown
insert the records into the database; the data-testid attributes are the keys used to identify the information
###Code
for li_class in soup.find_all('li', class_ = 'Grid__CellBox-sc-144isrp-0 SearchResultsList__WideCell-b7y9ki-2 jiZmPM'):
try:
for price_div in li_class.find_all('div',{'data-testid':'property-price'}):
price =int(price_div.text.replace('$','').replace(",",""))
for bed_div in li_class.find_all('div', {'data-testid':'property-beds'}):
bed= int(bed_div.text.replace('bd','').replace(",",""))
for bath_div in li_class.find_all('div',{'data-testid':'property-baths'}):
bath =int(bath_div.text.replace('ba','').replace(",",""))
for area_div in li_class.find_all('div',{'data-testid':'property-floorSpace'}):
area=int(area_div.text.split('sqft')[0].replace(",",""))
for address_div in li_class.find_all('div',{'data-testid':'property-address'}):
address =address_div.text
try:
sql_insert = """
insert into gp9.house(price,bed,bath,area,address)
values('{}','{}','{}','{}','{}')
""".format(price,bed,bath,area,address)
cur.execute(sql_insert)
conn.commit()
except:
conn.rollback()
except:
pass
###Output
_____no_output_____
###Markdown
Query the Table
###Code
df = pandas.read_sql_query('select * from gp9.house ', conn)
df[:10]
###Output
_____no_output_____
###Markdown
basic stat
###Code
df.describe()
###Output
_____no_output_____
###Markdown
price distribution
###Code
df['price'].hist()
###Output
_____no_output_____
###Markdown
bed vs bath
###Code
df.plot.scatter(x='bed',y='bath')
###Output
_____no_output_____
###Markdown
Loading the dataset
###Code
import csv
import pandas as pd
def read_csv_file_data(csv_file):
lines = []
with open(csv_file, newline='') as file:
reader = csv.reader(file, delimiter='\t')
for row in reader:
lines.append(row)
header = ['label', 'message']
return pd.DataFrame(data=lines, columns=header)
CSV_FILE_NAME = './../../data/SMS_Spam_Collection/SMSSpamCollection'
df_sms = read_csv_file_data(CSV_FILE_NAME)
print(df_sms.shape)
df_sms.head()
del CSV_FILE_NAME
###Output
_____no_output_____
###Markdown
Exploratory data analysis
###Code
ham = df_sms[df_sms['label'] == 'ham']['message']
spam = df_sms[df_sms['label'] == 'spam']['message']
print(ham.shape)
print(spam.shape)
import matplotlib.pyplot as plt
import numpy as np
def plot_class_proportions():
objects = ('ham', 'spam')
y_pos = np.arange(len(objects))
y = [ham.shape[0], spam.shape[0]]
plt.bar(y_pos, y, align='center')
plt.xticks(y_pos, objects)
plt.show()
plot_class_proportions()
p_all = df_sms.shape[0]
p_ham = (ham.shape[0] / p_all) * 100
p_spam = (spam.shape[0] / p_all) * 100
print(p_ham, '%')
print(p_spam, '%')
###Output
86.59368269921033 %
13.406317300789663 %
###Markdown
- 87% (4825) of the messages are "valid" (ham)
- 13% (747) of the messages are spam
###Code
del p_ham
del p_spam
del p_all
duplicated = df_sms[df_sms.duplicated(subset=['message'], keep=False)]['message']
print(duplicated.shape)
def plot_unique_duplication_relation():
objects = ('unique', 'duplicated')
y_pos = np.arange(len(objects))
y = [df_sms.shape[0] - duplicated.shape[0], duplicated.shape[0]]
plt.bar(y_pos, y, align='center')
plt.xticks(y_pos, objects)
plt.show()
plot_unique_duplication_relation()
def count_duplicates_by_labels(_df, _ham, _spam, dup):
_dup_count = []
for i in range(0, len(dup)):
_tmp = _df[_df['message'] == dup.values[i]]
_count = _tmp.shape[0]
_labels_count = _tmp['label'].value_counts()
_is_in_all_classes = False
if _labels_count.shape != (1,):
_is_in_all_classes = True
_dup_count.append([_tmp['message'], _count, _is_in_all_classes])
header = ['message', 'duplicates', 'is_crossed']
return pd.DataFrame(data=_dup_count, columns=header)
df_l = count_duplicates_by_labels(df_sms, ham, spam, duplicated)
crossed = df_l[df_l['is_crossed'] == True]['is_crossed']
print(crossed.shape)
max_dup_no = df_l['duplicates'].max()
print(max_dup_no)
import numpy as np
import matplotlib.pyplot as plt
def plot_duplicates_occurrences(y):
y_pos = np.arange(len(y))
plt.bar(y_pos, y, align='center')
plt.title('Number of duplicates occurrences')
plt.show()
dup_no = list(set(df_l['duplicates']))
plot_duplicates_occurrences(dup_no)
most_frequent = df_l[(df_l['duplicates'] == max_dup_no)]['message']
print(most_frequent.values[0][0:1])
###Output
80 Sorry, I'll call later
Name: message, dtype: object
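###Markdown
If the duplicated messages were to be removed before modeling, pandas can do it in one call (a sketch; keep='first' retains a single copy of each message text):
###Code
# sketch only: keep one row per distinct message text
df_unique = df_sms.drop_duplicates(subset=['message'], keep='first')
print(df_unique.shape)
###Output
_____no_output_____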
###Markdown
- 684 messages are repeated (not unique)
- none of the duplicated messages occurs in both classes
- the largest number of repetitions of a duplicated message is 30
- the most frequently repeated message is "Sorry, I'll call later"
###Code
del duplicated
del df_l
del crossed
del dup_no
del max_dup_no
del most_frequent
def compute_messages_lengths(df):
_lengths = []
for index, row in df.iterrows():
_lengths.append(len(row['message']))
return _lengths
msg_lengths = list(set(compute_messages_lengths(df_sms)))
print(min(msg_lengths))
print(max(msg_lengths))
def print_msg_filtered_by_length(df, length):
for index, row in df.iterrows():
if len(row['message']) == length:
print(row['message'])
print_msg_filtered_by_length(df_sms, 10)
###Output
Can a not?
Ok no prob
Anytime...
East coast
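###Markdown
The same lengths can be computed without an explicit loop through the pandas string accessor (a sketch equivalent to compute_messages_lengths above):
###Code
# sketch only: vectorized message lengths
lengths = df_sms['message'].str.len()
print(lengths.min(), lengths.max())
###Output
_____no_output_____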
###Markdown
- message lengths vary
- the minimum message length is 2 characters
- the maximum text length is 910 characters
###Code
del msg_lengths
def count_words(df, gc):
_counter = {}
for index, value in df.items():
_msg = str(value).split(' ')[1:-1]
for word in _msg:
if word in _counter:
_counter[word] += 1
gc[word] += 1
else:
_counter[word] = 1
gc[word] = 1
return gc, _counter
gc = {}
gc, hc = count_words(ham, gc)
gc, sc = count_words(spam, gc)
print(len(gc))
print(len(hc))
print(len(sc))
def get_most_frequent(counter_dict, entries_no=16):
sorted_dict = {k:v for k,v in sorted(counter_dict.items(), key=lambda item: item[1], reverse=True)}
_freq = {}
_no = 0
for key, value in sorted_dict.items():
_freq[key] = value
_no += 1
if _no == entries_no:
break
return _freq
def print_most_frequent(freq_dict):
for key, value in freq_dict.items():
print("{k:15}, {v}".format(k=key, v=value))
g_freq = get_most_frequent(gc)
h_freq = get_most_frequent(hc)
s_freq = get_most_frequent(sc)
print('*** GLOBAL ***')
print_most_frequent(g_freq)
print()
print('*** HAM ***')
print_most_frequent(h_freq)
print()
print('*** SPAM ***')
print_most_frequent(s_freq)
common_part = list(set(h_freq).intersection(s_freq))
complement = list(set(h_freq).difference(s_freq))
print(common_part)
print(complement)
###Output
['is', 'to', 'a', 'and', 'the', 'you', 'for']
['', 'I', 'of', 'me', 'u', 'my', 'that', 'in', 'i']
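###Markdown
As an aside, the standard library's collections.Counter covers the same tallying and top-k selection; a sketch over the ham Series defined earlier, mirroring the split(' ')[1:-1] tokenization used above:
###Code
from collections import Counter

# sketch only: count words in ham messages and take the 16 most common
ham_counter = Counter()
for msg in ham:
    ham_counter.update(str(msg).split(' ')[1:-1])
print(ham_counter.most_common(16))
###Output
_____no_output_____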
###Markdown
- the most frequent words in the messages are: 'to', 'you', 'I', 'the', 'a', 'and', 'in', 'i', 'u', 'is', 'my', 'me', 'of', 'for', 'that', 'your'
- the classes differ in how frequently particular words occur
###Code
del gc
del sc
del hc
del common_part
del complement
del g_freq
del h_freq
del s_freq
del ham
del spam
###Output
_____no_output_____
###Markdown
Text preprocessing
###Code
def eliminate_punctuation_marks(msg_str):
_puns = ['?', "'", '-', '(', ')', '[', ']', '^', '"', ',', ':', ';', '!']
_msg = str(msg_str)
for i in range(0, len(_puns)):
_msg = _msg.replace(_puns[i], '')
return _msg
def pre_process_msg_and_get(msg):
_msg = eliminate_punctuation_marks(msg)
_msg = _msg.lower()
_msg = _msg.split(' ')[1:-1]
return _msg
def rebuild_to_raw_data_and_pre_process(df):
_labels = []
_messages = []
for index, row in df.iterrows():
_labels.append(row['label'])
_messages.append(pre_process_msg_and_get(row['message']))
return _messages, _labels
messages, labels = rebuild_to_raw_data_and_pre_process(df_sms)
print(len(messages))
print(len(labels))
import nltk
nltk.download('wordnet')    # data for WordNetLemmatizer
nltk.download('stopwords')  # data for nltk.corpus.stopwords used below
nltk.download('punkt')
import nltk.stem.wordnet as nsw
import nltk.stem.porter as nsp
def convert_msg_to_canonical(msg, function):
_data = []
for i in range(0, len(msg)):
_sub_data = []
for j in range(0, len(msg[i])):
_word = msg[i][j]
_canonical = function(_word)
_sub_data.append(_canonical)
_data.append(_sub_data)
return _data
def to_lemma(string):
return wnl.lemmatize(string, 'v')
def to_stem(string):
return ps.stem(string)
wnl = nsw.WordNetLemmatizer()
ps = nsp.PorterStemmer()
msg_lem = convert_msg_to_canonical(messages, to_lemma)
msg_stem = convert_msg_to_canonical(messages, to_stem)
import random as rnd
def group_indices_by_labels(_labels):
_ham = [i if _labels[i] == 'ham' else None for i in range(0, len(_labels))]
_spam = [i if _labels[i] == 'spam' else None for i in range(0, len(_labels))]
_ham = [x for x in _ham if x is not None]
_spam = [x for x in _spam if x is not None]
return _ham, _spam
def get_random_indices(indices):
msg_no = 8
randoms = indices[:]
rnd.shuffle(randoms)
return randoms[0:msg_no]
ham_indices, spam_indices = group_indices_by_labels(labels)
random_hams = get_random_indices(ham_indices)
random_spams = get_random_indices(spam_indices)
print(random_hams)
print(random_spams)
def print_lemmas_and_stems(lemmas, stems, random_idx):
for i in range(0, len(random_idx)):
ham_index = random_idx[i]
_lemma = lemmas[ham_index]
_stem = stems[ham_index]
print('l', _lemma)
print('s', _stem)
def print_comparison_lemmas_with_stems(lemmas, stems, hams, spams):
print('*** HAM ***')
print_lemmas_and_stems(lemmas, stems, hams)
print()
print('*** SPAM ***')
print_lemmas_and_stems(lemmas, stems, spams)
print_comparison_lemmas_with_stems(msg_lem, msg_stem, random_hams, random_spams)
del wnl
del ps
del random_hams
del random_spams
del messages
del msg_stem
import nltk.corpus as nc
stop_words = tuple(set(nc.stopwords.words('english')))
print(stop_words[0:8])
print(stop_words[-8:-1])
###Output
('shouldn', 'd', 'ma', 'an', 'out', 'mustn', 'you', 'had')
("it's", 't', "she's", "that'll", 'y', 'all', "needn't")
###Markdown
The stop-word list appears to be sufficient, since it contains words such as "ain't" (a colloquialism) or "up" (probably on account of the careless "yup"). Moreover, slang evolves faster than standard speech, and a slang-specific word list is hard to find anyway.
###Code
def filter_by_stop_words(lemmas, stoppers):
_filtered = []
for sentence in lemmas:
_entry = []
for word in sentence:
if word not in stoppers:
_entry.append(word)
_filtered.append(_entry)
return _filtered
filtered_lemmas = filter_by_stop_words(msg_lem, stop_words)
prev_most_frequent_words = ['to', 'you', 'I', 'the', 'a', 'and', 'in', 'i', 'u', 'is', 'my', 'me', 'of', 'for', 'that', 'your']
freq_words_in_stop_words = [x for x in stop_words if x in prev_most_frequent_words]
complement = set(prev_most_frequent_words).difference(freq_words_in_stop_words)
print(len(prev_most_frequent_words))
print(len(freq_words_in_stop_words))
print(complement)
del stop_words
del msg_lem
del prev_most_frequent_words
del freq_words_in_stop_words
del complement
import sklearn.feature_extraction.text as skf
def lemmas_to_one_hot(lemmas, labels):
# token for accepting one-letter words
v = skf.CountVectorizer(lowercase=False, token_pattern=r"(?u)\b\w+\b")
one_hot = []
semantically_empty_sentences = 0
_labels = []
for i in range(0, len(lemmas)):
is_ok = True
try:
result = v.fit_transform(lemmas[i]).toarray()
one_hot.append(result)
except ValueError:
semantically_empty_sentences += 1
is_ok = False
if is_ok:
_labels.append(0 if labels[i] == 'ham' else 1)
return one_hot, _labels, semantically_empty_sentences
X_embeddings, y_embeddings, rejected = lemmas_to_one_hot(filtered_lemmas, labels)
print(len(X_embeddings))
print(len(y_embeddings))
print(rejected)
del rejected
del filtered_lemmas
del labels
del ham_indices
del spam_indices
del df_sms
del nc
###Output
_____no_output_____
###Markdown
Naive Bayes classification
###Code
def expand_labels(x_set, _labels):
expanded = []
for i in range(0, len(_labels)):
x_dim = x_set[i].shape[0]
_new = [_labels[i] for _ in range(0, x_dim)]
expanded.append(_new)
return expanded
y_exp = expand_labels(X_embeddings, y_embeddings)
import sklearn.naive_bayes as skb
case = 11
bayes = skb.MultinomialNB()
bayes.fit(X_embeddings[case], y_exp[case])
print(bayes.coef_)
del bayes
del case
del y_embeddings
del y_exp
del X_embeddings
!jupyter nbconvert --to pdf lab6.ipynb
del _exit_code
###Output
_____no_output_____
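###Markdown
The cell above fits the classifier to the one-hot matrix of a single message, which mainly exercises the API. For contrast, a minimal sketch of the conventional pipeline that vectorizes the whole corpus at once and measures held-out accuracy (the texts and labels here are toy placeholders; substitute the real message strings and 0/1 labels):
###Code
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import accuracy_score

# sketch only: toy data standing in for the real corpus
texts = ['free prize call now', 'see you at lunch', 'win cash today', 'meeting moved to five']
y = [1, 0, 1, 0]  # 1 = spam, 0 = ham

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)  # one bag-of-words matrix for the whole corpus
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)
clf = MultinomialNB().fit(X_tr, y_tr)
print(accuracy_score(y_te, clf.predict(X_te)))
###Output
_____no_output_____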
###Markdown
Extract Job Posts from Indeed Before extracting job posts from [Indeed](https://www.indeed.com/), make sure you have checked their [robots.txt](https://www.indeed.com/robots.txt) file. Create a table in the database
###Code
import pandas
import configparser
import psycopg2
###Output
/home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages/psycopg2/__init__.py:144: UserWarning: The psycopg2 wheel package will be renamed from release 2.8; in order to keep installing from binary please use "pip install psycopg2-binary" instead. For details see: <http://initd.org/psycopg/docs/install.html#binary-install-from-pypi>.
""")
###Markdown
Read the database connection info from the config.ini
###Code
config = configparser.ConfigParser()
config.read('config.ini')
host = config['myaws']['host']
db = config['myaws']['db']
user = config['myaws']['user']
pwd = config['myaws']['pwd']
###Output
_____no_output_____
###Markdown
Establish a connection to the database, and create a cursor.
###Code
conn = psycopg2.connect(host = host,
user = user,
password = pwd,
dbname = db
)
cur = conn.cursor()
###Output
_____no_output_____
###Markdown
Design the table in SQL
###Code
# replace the schema and table name to your schema and table name
table_sql = """
CREATE TABLE IF NOT EXISTS gp32.indeed
(
id SERIAL,
job_title VARCHAR(200),
job_company VARCHAR(200),
job_loc VARCHAR(200),
job_salary VARCHAR(200),
job_summary TEXT,
PRIMARY KEY(id)
);
"""
###Output
_____no_output_____
###Markdown
create the table
###Code
cur.execute(table_sql)
conn.commit()
###Output
_____no_output_____
###Markdown
Request HTML. [urllib.request](https://docs.python.org/3/library/urllib.request.html) makes simple HTTP requests to visit a web page and get the content via the Python standard library. Here we define the URL to search job posts about Intelligence analyst.
###Code
url = 'https://www.indeed.com/jobs?q=intelligence+analyst&start=2'
import urllib.request
response = urllib.request.urlopen(url)
html_data= response.read()
#print(html_data.decode('utf-8'))
###Output
_____no_output_____
###Markdown
Parse HTML. We can use the inspector tool in browsers to analyze webpages and use [beautifulsoup](https://www.crummy.com/software/BeautifulSoup/bs4/doc/) to extract webpage data. pip install beautifulsoup4 if needed.
###Code
!pip install beautifulsoup4
from bs4 import BeautifulSoup
soup = BeautifulSoup(html_data,'html.parser')
#print (soup)
###Output
_____no_output_____
###Markdown
Use the tag.find_all('tag_name', tag_attr='possible_value') function to return a list of tags where the attribute equals the possible_value. Common attributes include id and class_. Common functions include tag.text (returns the visible part of the tag) and tag.get('attribute') (returns the value of the attribute of the tag). Since all the job posts are in the div tag class = 'jobsearch-Sprep...', we need to find that div tag from the body tag.
###Code
for table_resultsBody in soup.find_all('table', id = 'resultsBody'):
pass
#print(table_resultsBody)
for table_pageContent in table_resultsBody.find_all('table', id = 'pageContent'):
pass
#print(table_pageContent)
for td_resultsCol in table_pageContent.find_all('td', id = 'resultsCol'):
pass
print(td_resultsCol)
###Output
<td id="resultsCol"> ... [lengthy HTML of the results column omitted; it contains the jobsearch-SerpJobCard divs with the job title, company, location, salary, and summary fields]
</div></div>
</div>
<a id="jobPostingsAnchor" tabindex="-1"></a>
<div class="jobsearch-SerpJobCard unifiedRow row result" data-jk="ac4fa8ba0ed93e69" data-tn-component="organicJob" id="p_ac4fa8ba0ed93e69">
<h2 class="title">
<a class="jobtitle turnstileLink" data-tn-element="jobTitle" href="/rc/clk?jk=ac4fa8ba0ed93e69&fccid=9a92a4bf81326e1e&vjs=3" id="jl_ac4fa8ba0ed93e69" onclick="setRefineByCookie([]); return rclk(this,jobmap[0],true,0);" onmousedown="return rclk(this,jobmap[0],0);" rel="noopener nofollow" target="_blank" title="Intelligence Specialist">
<b>Intelligence</b> Specialist</a>
</h2>
<div class="sjcl">
<div>
<span class="company">
<a class="turnstileLink" data-tn-element="companyName" href="/cmp/AIG" onmousedown="this.href = appendParamsOnce(this.href, 'from=SERP&campaignid=serp-linkcompanyname&fromjk=ac4fa8ba0ed93e69&jcid=9a92a4bf81326e1e')" rel="noopener" target="_blank">
AIG</a></span>
<span class="ratingsDisplay">
<a class="ratingNumber" data-tn-variant="cmplinktst2" href="/cmp/AIG/reviews" onmousedown="this.href = appendParamsOnce(this.href, '?campaignid=cmplinktst2&from=SERP&jt=Intelligence+Specialist&fromjk=ac4fa8ba0ed93e69&jcid=9a92a4bf81326e1e');" rel="noopener" target="_blank" title="AIG reviews">
<span class="ratingsContent">
3.7<svg class="starIcon" height="12px" role="img" width="12px">
<g>
<path d="M 12.00,4.34 C 12.00,4.34 7.69,3.97 7.69,3.97 7.69,3.97 6.00,0.00 6.00,0.00 6.00,0.00 4.31,3.98 4.31,3.98 4.31,3.98 0.00,4.34 0.00,4.34 0.00,4.34 3.28,7.18 3.28,7.18 3.28,7.18 2.29,11.40 2.29,11.40 2.29,11.40 6.00,9.16 6.00,9.16 6.00,9.16 9.71,11.40 9.71,11.40 9.71,11.40 8.73,7.18 8.73,7.18 8.73,7.18 12.00,4.34 12.00,4.34 Z" style="fill: #FFB103"></path>
</g>
</svg>
</span>
</a>
</span>
</div>
<div class="recJobLoc" data-rc-loc="Houston, TX" id="recJobLoc_ac4fa8ba0ed93e69" style="display: none"></div>
<span class="location accessible-contrast-color-location">Houston, TX</span>
</div>
<div class="summary">
<ul style="list-style-type:circle;margin-top: 0px;margin-bottom: 0px;padding-left:20px;">
<li>Bachelor's degree required; Bachelor s Degree in History, Political Science, International Studies or related field preferred.</li>
</ul></div>
<div class="jobsearch-SerpJobCard-footer">
<div class="jobsearch-SerpJobCard-footerActions">
<div class="result-link-bar-container">
<div class="result-link-bar"><span class="date">30+ days ago</span><span class="tt_set" id="tt_set_0"><div class="job-reaction"><button aria-expanded="false" aria-haspopup="true" aria-label="save or dislike" class="job-reaction-kebab" data-ol-has-click-handler="" onclick="toggleKebabMenu('ac4fa8ba0ed93e69', false, event); return false;" tabindex="0"></button><span class="job-reaction-kebab-menu"><button class="job-reaction-kebab-item job-reaction-save" data-ol-has-click-handler="" onclick="changeJobState('ac4fa8ba0ed93e69', 'save', 'linkbar', false, '');return false;"><svg focusable="false" height="16" viewbox="0 0 24 24" width="16"><g><path d="M16.5,3A6,6,0,0,0,12,5.09,6,6,0,0,0,7.5,3,5.45,5.45,0,0,0,2,8.5C2,12.28,5.4,15.36,10.55,20L12,21.35,13.45,20C18.6,15.36,22,12.28,22,8.5A5.45,5.45,0,0,0,16.5,3ZM12.1,18.55l-0.1.1-0.1-.1C7.14,14.24,4,11.39,4,8.5A3.42,3.42,0,0,1,7.5,5a3.91,3.91,0,0,1,3.57,2.36h1.87A3.88,3.88,0,0,1,16.5,5,3.42,3.42,0,0,1,20,8.5C20,11.39,16.86,14.24,12.1,18.55Z" fill="#2d2d2d"></path></g></svg><span class="job-reaction-kebab-item-text">Save job</span></button><button class="job-reaction-kebab-item job-reaction-dislike" data-ol-has-click-handler="" onclick="dislikeJob(false, false, 'ac4fa8ba0ed93e69', 'unsave', 'linkbar', false, '');"><span class="job-reaction-dislike-icon"></span><span class="job-reaction-kebab-item-text">Not interested</span></button><button class="job-reaction-kebab-item job-reaction-report" onclick="reportJob('ac4fa8ba0ed93e69');"><span class="job-reaction-report-icon"></span><span class="job-reaction-kebab-item-text">Report Job</span></button></span></div><span class="result-link-bar-separator">·</span><a class="sl resultLink save-job-link" href="#" id="sj_ac4fa8ba0ed93e69" onclick="changeJobState('ac4fa8ba0ed93e69', 'save', 'linkbar', false, ''); return false;" title="Save this job to my.indeed">Save job</a><span class="result-link-bar-separator">·</span><button aria-expanded="false" class="sl resultLink more-link" id="tog_0" onclick="toggleMoreLinks('ac4fa8ba0ed93e69', '0'); return false;">More...</button></span><div class="edit_note_content" id="editsaved2_ac4fa8ba0ed93e69" style="display:none;"></div><script>if (!window['result_ac4fa8ba0ed93e69']) {window['result_ac4fa8ba0ed93e69'] = {};}window['result_ac4fa8ba0ed93e69']['showSource'] = false; window['result_ac4fa8ba0ed93e69']['source'] = "AIG"; window['result_ac4fa8ba0ed93e69']['loggedIn'] = false; window['result_ac4fa8ba0ed93e69']['showMyJobsLinks'] = false;window['result_ac4fa8ba0ed93e69']['undoAction'] = "unsave";window['result_ac4fa8ba0ed93e69']['relativeJobAge'] = "30+ days ago";window['result_ac4fa8ba0ed93e69']['jobKey'] = "ac4fa8ba0ed93e69"; window['result_ac4fa8ba0ed93e69']['myIndeedAvailable'] = true; window['result_ac4fa8ba0ed93e69']['showMoreActionsLink'] = window['result_ac4fa8ba0ed93e69']['showMoreActionsLink'] || true; window['result_ac4fa8ba0ed93e69']['resultNumber'] = 0; window['result_ac4fa8ba0ed93e69']['jobStateChangedToSaved'] = false; window['result_ac4fa8ba0ed93e69']['searchState'] = "q=intelligence analyst&start=2"; window['result_ac4fa8ba0ed93e69']['basicPermaLink'] = "https://www.indeed.com"; window['result_ac4fa8ba0ed93e69']['saveJobFailed'] = false; window['result_ac4fa8ba0ed93e69']['removeJobFailed'] = false; window['result_ac4fa8ba0ed93e69']['requestPending'] = false; window['result_ac4fa8ba0ed93e69']['notesEnabled'] = true; window['result_ac4fa8ba0ed93e69']['currentPage'] = "serp"; window['result_ac4fa8ba0ed93e69']['sponsored'] = 
false;window['result_ac4fa8ba0ed93e69']['reportJobButtonEnabled'] = false; window['result_ac4fa8ba0ed93e69']['showMyJobsHired'] = false; window['result_ac4fa8ba0ed93e69']['showSaveForSponsored'] = false; window['result_ac4fa8ba0ed93e69']['showJobAge'] = true; window['result_ac4fa8ba0ed93e69']['showHolisticCard'] = true; window['result_ac4fa8ba0ed93e69']['showDislike'] = true; window['result_ac4fa8ba0ed93e69']['showKebab'] = true; window['result_ac4fa8ba0ed93e69']['showReport'] = true;</script></div></div>
</div>
</div>
<div class="tab-container">
<div class="more-links-container result-tab" id="tt_display_0" style="display:none;"><div class="more_actions" id="more_0"><ul><li><span class="mat">View all <a href="/q-AIG-l-Houston,-TX-jobs.html">AIG jobs in Houston, TX</a> - <a href="/l-Houston,-TX-jobs.html">Houston jobs</a></span></li><li><span class="mat">Learn more about working at <a href="/cmp/AIG/about" onmousedown="this.href = appendParamsOnce(this.href, '?fromjk=ac4fa8ba0ed93e69&from=serp-more&campaignid=serp-more&jcid=9a92a4bf81326e1e');">AIG</a></span></li><li><span class="mat">See popular <a href="/cmp/AIG/faq" onmousedown="this.href = appendParamsOnce(this.href, '?from=serp-more&campaignid=serp-more&fromjk=ac4fa8ba0ed93e69&jcid=9a92a4bf81326e1e');">questions & answers about AIG</a></span></li></ul></div><a class="close-link closeLink" href="#" onclick="toggleMoreLinks('ac4fa8ba0ed93e69'); return false;" title="Close"></a></div><div class="dya-container result-tab"></div>
<div class="tellafriend-container result-tab email_job_content"></div>
<div class="sign-in-container result-tab"></div>
<div class="notes-container result-tab"></div>
</div>
</div>
<div class="jobToJobRec_Hide" id="jobToJobRec_ac4fa8ba0ed93e69_sj"></div>
<div class="jobsearch-SerpJobCard unifiedRow row result" data-jk="dd390e7e0b7f6c33" data-tn-component="organicJob" id="p_dd390e7e0b7f6c33">
<h2 class="title">
<a class="jobtitle turnstileLink" data-tn-element="jobTitle" href="/rc/clk?jk=dd390e7e0b7f6c33&fccid=f7282ad3490137c7&vjs=3" id="jl_dd390e7e0b7f6c33" onclick="setRefineByCookie([]); return rclk(this,jobmap[1],true,0);" onmousedown="return rclk(this,jobmap[1],0);" rel="noopener nofollow" target="_blank" title="Open Source Intelligence Analyst">
Open Source <b>Intelligence</b> <b>Analyst</b></a>
<span class="new">new</span></h2>
<div class="sjcl">
<div>
<span class="company">
<a class="turnstileLink" data-tn-element="companyName" href="/cmp/University-of-Texas-At-Austin" onmousedown="this.href = appendParamsOnce(this.href, 'from=SERP&campaignid=serp-linkcompanyname&fromjk=dd390e7e0b7f6c33&jcid=f7282ad3490137c7')" rel="noopener" target="_blank">
University of Texas at Austin</a></span>
<span class="ratingsDisplay">
<a class="ratingNumber" data-tn-variant="cmplinktst2" href="/cmp/University-of-Texas-At-Austin/reviews" onmousedown="this.href = appendParamsOnce(this.href, '?campaignid=cmplinktst2&from=SERP&jt=Open+Source+Intelligence+Analyst&fromjk=dd390e7e0b7f6c33&jcid=f7282ad3490137c7');" rel="noopener" target="_blank">
<span class="ratingsContent">
4.3<svg class="starIcon" height="12px" role="img" width="12px">
<g>
<path d="M 12.00,4.34 C 12.00,4.34 7.69,3.97 7.69,3.97 7.69,3.97 6.00,0.00 6.00,0.00 6.00,0.00 4.31,3.98 4.31,3.98 4.31,3.98 0.00,4.34 0.00,4.34 0.00,4.34 3.28,7.18 3.28,7.18 3.28,7.18 2.29,11.40 2.29,11.40 2.29,11.40 6.00,9.16 6.00,9.16 6.00,9.16 9.71,11.40 9.71,11.40 9.71,11.40 8.73,7.18 8.73,7.18 8.73,7.18 12.00,4.34 12.00,4.34 Z" style="fill: #FFB103"></path>
</g>
</svg>
</span>
</a>
</span>
</div>
<div class="recJobLoc" data-rc-loc="Austin, TX" id="recJobLoc_dd390e7e0b7f6c33" style="display: none"></div>
<span class="location accessible-contrast-color-location">Austin, TX 78712 <span style="font-size: smaller">(University of Texas area)</span></span>
</div>
<div class="summary">
<ul style="list-style-type:circle;margin-top: 0px;margin-bottom: 0px;padding-left:20px;">
<li>Bachelor’s degree in any discipline with three (3) years of directly related research experience OR an associate degree and five (5) years of directly related…</li>
</ul></div>
<div class="jobsearch-SerpJobCard-footer">
<div class="jobsearch-SerpJobCard-footerActions">
<div class="result-link-bar-container">
<div class="result-link-bar"><span class="date">Just posted</span><span class="tt_set" id="tt_set_1"><div class="job-reaction"><button aria-expanded="false" aria-haspopup="true" aria-label="save or dislike" class="job-reaction-kebab" data-ol-has-click-handler="" onclick="toggleKebabMenu('dd390e7e0b7f6c33', false, event); return false;" tabindex="0"></button><span class="job-reaction-kebab-menu"><button class="job-reaction-kebab-item job-reaction-save" data-ol-has-click-handler="" onclick="changeJobState('dd390e7e0b7f6c33', 'save', 'linkbar', false, '');return false;"><svg focusable="false" height="16" viewbox="0 0 24 24" width="16"><g><path d="M16.5,3A6,6,0,0,0,12,5.09,6,6,0,0,0,7.5,3,5.45,5.45,0,0,0,2,8.5C2,12.28,5.4,15.36,10.55,20L12,21.35,13.45,20C18.6,15.36,22,12.28,22,8.5A5.45,5.45,0,0,0,16.5,3ZM12.1,18.55l-0.1.1-0.1-.1C7.14,14.24,4,11.39,4,8.5A3.42,3.42,0,0,1,7.5,5a3.91,3.91,0,0,1,3.57,2.36h1.87A3.88,3.88,0,0,1,16.5,5,3.42,3.42,0,0,1,20,8.5C20,11.39,16.86,14.24,12.1,18.55Z" fill="#2d2d2d"></path></g></svg><span class="job-reaction-kebab-item-text">Save job</span></button><button class="job-reaction-kebab-item job-reaction-dislike" data-ol-has-click-handler="" onclick="dislikeJob(false, false, 'dd390e7e0b7f6c33', 'unsave', 'linkbar', false, '');"><span class="job-reaction-dislike-icon"></span><span class="job-reaction-kebab-item-text">Not interested</span></button><button class="job-reaction-kebab-item job-reaction-report" onclick="reportJob('dd390e7e0b7f6c33');"><span class="job-reaction-report-icon"></span><span class="job-reaction-kebab-item-text">Report Job</span></button></span></div><span class="result-link-bar-separator">·</span><a class="sl resultLink save-job-link" href="#" id="sj_dd390e7e0b7f6c33" onclick="changeJobState('dd390e7e0b7f6c33', 'save', 'linkbar', false, ''); return false;" title="Save this job to my.indeed">Save job</a><span class="result-link-bar-separator">·</span><button aria-expanded="false" class="sl resultLink more-link" id="tog_1" onclick="toggleMoreLinks('dd390e7e0b7f6c33', '1'); return false;">More...</button></span><div class="edit_note_content" id="editsaved2_dd390e7e0b7f6c33" style="display:none;"></div><script>if (!window['result_dd390e7e0b7f6c33']) {window['result_dd390e7e0b7f6c33'] = {};}window['result_dd390e7e0b7f6c33']['showSource'] = false; window['result_dd390e7e0b7f6c33']['source'] = "University of Texas at Austin"; window['result_dd390e7e0b7f6c33']['loggedIn'] = false; window['result_dd390e7e0b7f6c33']['showMyJobsLinks'] = false;window['result_dd390e7e0b7f6c33']['undoAction'] = "unsave";window['result_dd390e7e0b7f6c33']['relativeJobAge'] = "Just posted";window['result_dd390e7e0b7f6c33']['jobKey'] = "dd390e7e0b7f6c33"; window['result_dd390e7e0b7f6c33']['myIndeedAvailable'] = true; window['result_dd390e7e0b7f6c33']['showMoreActionsLink'] = window['result_dd390e7e0b7f6c33']['showMoreActionsLink'] || true; window['result_dd390e7e0b7f6c33']['resultNumber'] = 1; window['result_dd390e7e0b7f6c33']['jobStateChangedToSaved'] = false; window['result_dd390e7e0b7f6c33']['searchState'] = "q=intelligence analyst&start=2"; window['result_dd390e7e0b7f6c33']['basicPermaLink'] = "https://www.indeed.com"; window['result_dd390e7e0b7f6c33']['saveJobFailed'] = false; window['result_dd390e7e0b7f6c33']['removeJobFailed'] = false; window['result_dd390e7e0b7f6c33']['requestPending'] = false; window['result_dd390e7e0b7f6c33']['notesEnabled'] = true; window['result_dd390e7e0b7f6c33']['currentPage'] = "serp"; window['result_dd390e7e0b7f6c33']['sponsored'] = 
false;window['result_dd390e7e0b7f6c33']['reportJobButtonEnabled'] = false; window['result_dd390e7e0b7f6c33']['showMyJobsHired'] = false; window['result_dd390e7e0b7f6c33']['showSaveForSponsored'] = false; window['result_dd390e7e0b7f6c33']['showJobAge'] = true; window['result_dd390e7e0b7f6c33']['showHolisticCard'] = true; window['result_dd390e7e0b7f6c33']['showDislike'] = true; window['result_dd390e7e0b7f6c33']['showKebab'] = true; window['result_dd390e7e0b7f6c33']['showReport'] = true;</script></div></div>
</div>
</div>
<div class="tab-container">
<div class="more-links-container result-tab" id="tt_display_1" style="display:none;"><div class="more_actions" id="more_1"><ul><li><span class="mat">View all <a href="/q-University-of-Texas-At-Austin-l-Austin,-TX-jobs.html">University of Texas at Austin jobs in Austin, TX</a> - <a href="/l-Austin,-TX-jobs.html">Austin jobs</a></span></li><li><span class="mat">Learn more about working at <a href="/cmp/University-of-Texas-At-Austin" onmousedown="this.href = appendParamsOnce(this.href, '?fromjk=dd390e7e0b7f6c33&from=serp-more&campaignid=serp-more&jcid=f7282ad3490137c7');">University of Texas at Austin</a></span></li><li><span class="mat">See popular <a href="/cmp/University-of-Texas-At-Austin/faq" onmousedown="this.href = appendParamsOnce(this.href, '?from=serp-more&campaignid=serp-more&fromjk=dd390e7e0b7f6c33&jcid=f7282ad3490137c7');">questions & answers about University of Texas at Austin</a></span></li></ul></div><a class="close-link closeLink" href="#" onclick="toggleMoreLinks('dd390e7e0b7f6c33'); return false;" title="Close"></a></div><div class="dya-container result-tab"></div>
<div class="tellafriend-container result-tab email_job_content"></div>
<div class="sign-in-container result-tab"></div>
<div class="notes-container result-tab"></div>
</div>
</div>
<div class="jobToJobRec_Hide" id="jobToJobRec_dd390e7e0b7f6c33_sj"></div>
<div class="jobsearch-SerpJobCard unifiedRow row result" data-jk="bc5eb93cf97830f1" data-tn-component="organicJob" id="p_bc5eb93cf97830f1">
<h2 class="title">
<a class="jobtitle turnstileLink" data-tn-element="jobTitle" href="/rc/clk?jk=bc5eb93cf97830f1&fccid=e9870e3159e9c6ac&vjs=3" id="jl_bc5eb93cf97830f1" onclick="setRefineByCookie([]); return rclk(this,jobmap[2],true,1);" onmousedown="return rclk(this,jobmap[2],1);" rel="noopener nofollow" target="_blank" title="Graduate Studies Program - Intelligence Analyst">
Graduate Studies Program - <b>Intelligence</b> <b>Analyst</b></a>
</h2>
<div class="sjcl">
<div>
<span class="company">
<a class="turnstileLink" data-tn-element="companyName" href="/cmp/Central-Intelligence-Agency" onmousedown="this.href = appendParamsOnce(this.href, 'from=SERP&campaignid=serp-linkcompanyname&fromjk=bc5eb93cf97830f1&jcid=e9870e3159e9c6ac')" rel="noopener" target="_blank">
Central <b>Intelligence</b> Agency</a></span>
<span class="ratingsDisplay">
<a class="ratingNumber" data-tn-variant="cmplinktst2" href="/cmp/Central-Intelligence-Agency/reviews" onmousedown="this.href = appendParamsOnce(this.href, '?campaignid=cmplinktst2&from=SERP&jt=Graduate+Studies+Program+-+Intelligence+Analyst&fromjk=bc5eb93cf97830f1&jcid=e9870e3159e9c6ac');" rel="noopener" target="_blank" title="Central Intelligence Agency reviews">
<span class="ratingsContent">
4.3<svg class="starIcon" height="12px" role="img" width="12px">
<g>
<path d="M 12.00,4.34 C 12.00,4.34 7.69,3.97 7.69,3.97 7.69,3.97 6.00,0.00 6.00,0.00 6.00,0.00 4.31,3.98 4.31,3.98 4.31,3.98 0.00,4.34 0.00,4.34 0.00,4.34 3.28,7.18 3.28,7.18 3.28,7.18 2.29,11.40 2.29,11.40 2.29,11.40 6.00,9.16 6.00,9.16 6.00,9.16 9.71,11.40 9.71,11.40 9.71,11.40 8.73,7.18 8.73,7.18 8.73,7.18 12.00,4.34 12.00,4.34 Z" style="fill: #FFB103"></path>
</g>
</svg>
</span>
</a>
</span>
</div>
<div class="recJobLoc" data-rc-loc="Washington, DC" id="recJobLoc_bc5eb93cf97830f1" style="display: none"></div>
<span class="location accessible-contrast-color-location">Washington, DC</span>
</div>
<div class="salarySnippet holisticSalary">
<span class="salary no-wrap">
<span class="salaryText">
$26.28 - $37.51 an hour</span>
</span>
</div>
<div class="summary">
<ul style="list-style-type:circle;margin-top: 0px;margin-bottom: 0px;padding-left:20px;">
<li>Engineering, science students, or those in other technical programs, analyze and provide written and oral assessments on challenging national security issues…</li>
</ul></div>
<div class="jobsearch-SerpJobCard-footer">
<div class="jobsearch-SerpJobCard-footerActions">
<div class="result-link-bar-container">
<div class="result-link-bar"><span class="date">30+ days ago</span><span class="tt_set" id="tt_set_2"><div class="job-reaction"><button aria-expanded="false" aria-haspopup="true" aria-label="save or dislike" class="job-reaction-kebab" data-ol-has-click-handler="" onclick="toggleKebabMenu('bc5eb93cf97830f1', false, event); return false;" tabindex="0"></button><span class="job-reaction-kebab-menu"><button class="job-reaction-kebab-item job-reaction-save" data-ol-has-click-handler="" onclick="changeJobState('bc5eb93cf97830f1', 'save', 'linkbar', false, '');return false;"><svg focusable="false" height="16" viewbox="0 0 24 24" width="16"><g><path d="M16.5,3A6,6,0,0,0,12,5.09,6,6,0,0,0,7.5,3,5.45,5.45,0,0,0,2,8.5C2,12.28,5.4,15.36,10.55,20L12,21.35,13.45,20C18.6,15.36,22,12.28,22,8.5A5.45,5.45,0,0,0,16.5,3ZM12.1,18.55l-0.1.1-0.1-.1C7.14,14.24,4,11.39,4,8.5A3.42,3.42,0,0,1,7.5,5a3.91,3.91,0,0,1,3.57,2.36h1.87A3.88,3.88,0,0,1,16.5,5,3.42,3.42,0,0,1,20,8.5C20,11.39,16.86,14.24,12.1,18.55Z" fill="#2d2d2d"></path></g></svg><span class="job-reaction-kebab-item-text">Save job</span></button><button class="job-reaction-kebab-item job-reaction-dislike" data-ol-has-click-handler="" onclick="dislikeJob(false, false, 'bc5eb93cf97830f1', 'unsave', 'linkbar', false, '');"><span class="job-reaction-dislike-icon"></span><span class="job-reaction-kebab-item-text">Not interested</span></button><button class="job-reaction-kebab-item job-reaction-report" onclick="reportJob('bc5eb93cf97830f1');"><span class="job-reaction-report-icon"></span><span class="job-reaction-kebab-item-text">Report Job</span></button></span></div><span class="result-link-bar-separator">·</span><a class="sl resultLink save-job-link" href="#" id="sj_bc5eb93cf97830f1" onclick="changeJobState('bc5eb93cf97830f1', 'save', 'linkbar', false, ''); return false;" title="Save this job to my.indeed">Save job</a><span class="result-link-bar-separator">·</span><button aria-expanded="false" class="sl resultLink more-link" id="tog_2" onclick="toggleMoreLinks('bc5eb93cf97830f1', '2'); return false;">More...</button></span><div class="edit_note_content" id="editsaved2_bc5eb93cf97830f1" style="display:none;"></div><script>if (!window['result_bc5eb93cf97830f1']) {window['result_bc5eb93cf97830f1'] = {};}window['result_bc5eb93cf97830f1']['showSource'] = false; window['result_bc5eb93cf97830f1']['source'] = "Central Intelligence Agency"; window['result_bc5eb93cf97830f1']['loggedIn'] = false; window['result_bc5eb93cf97830f1']['showMyJobsLinks'] = false;window['result_bc5eb93cf97830f1']['undoAction'] = "unsave";window['result_bc5eb93cf97830f1']['relativeJobAge'] = "30+ days ago";window['result_bc5eb93cf97830f1']['jobKey'] = "bc5eb93cf97830f1"; window['result_bc5eb93cf97830f1']['myIndeedAvailable'] = true; window['result_bc5eb93cf97830f1']['showMoreActionsLink'] = window['result_bc5eb93cf97830f1']['showMoreActionsLink'] || true; window['result_bc5eb93cf97830f1']['resultNumber'] = 2; window['result_bc5eb93cf97830f1']['jobStateChangedToSaved'] = false; window['result_bc5eb93cf97830f1']['searchState'] = "q=intelligence analyst&start=2"; window['result_bc5eb93cf97830f1']['basicPermaLink'] = "https://www.indeed.com"; window['result_bc5eb93cf97830f1']['saveJobFailed'] = false; window['result_bc5eb93cf97830f1']['removeJobFailed'] = false; window['result_bc5eb93cf97830f1']['requestPending'] = false; window['result_bc5eb93cf97830f1']['notesEnabled'] = true; window['result_bc5eb93cf97830f1']['currentPage'] = "serp"; window['result_bc5eb93cf97830f1']['sponsored'] = 
false;window['result_bc5eb93cf97830f1']['reportJobButtonEnabled'] = false; window['result_bc5eb93cf97830f1']['showMyJobsHired'] = false; window['result_bc5eb93cf97830f1']['showSaveForSponsored'] = false; window['result_bc5eb93cf97830f1']['showJobAge'] = true; window['result_bc5eb93cf97830f1']['showHolisticCard'] = true; window['result_bc5eb93cf97830f1']['showDislike'] = true; window['result_bc5eb93cf97830f1']['showKebab'] = true; window['result_bc5eb93cf97830f1']['showReport'] = true;</script></div></div>
</div>
</div>
<div class="tab-container">
<div class="more-links-container result-tab" id="tt_display_2" style="display:none;"><div class="more_actions" id="more_2"><ul><li><span class="mat">View all <a href="/q-Central-Intelligence-Agency-l-Washington,-DC-jobs.html">Central Intelligence Agency jobs in Washington, DC</a> - <a href="/l-Washington,-DC-jobs.html">Washington jobs</a></span></li><li><span class="mat">Salary Search: <a href="/salaries/intelligence-analyst-Salaries,-Washington-DC" onmousedown="this.href = appendParamsOnce(this.href, '?campaignid=serp-more&fromjk=bc5eb93cf97830f1&from=serp-more');">Intelligence Analyst salaries in Washington, DC</a></span></li><li><span class="mat">Learn more about working at <a href="/cmp/Central-Intelligence-Agency" onmousedown="this.href = appendParamsOnce(this.href, '?fromjk=bc5eb93cf97830f1&from=serp-more&campaignid=serp-more&jcid=e9870e3159e9c6ac');">Central Intelligence Agency</a></span></li><li><span class="mat">See popular <a href="/cmp/Central-Intelligence-Agency/faq" onmousedown="this.href = appendParamsOnce(this.href, '?from=serp-more&campaignid=serp-more&fromjk=bc5eb93cf97830f1&jcid=e9870e3159e9c6ac');">questions & answers about Central Intelligence Agency</a></span></li></ul></div><a class="close-link closeLink" href="#" onclick="toggleMoreLinks('bc5eb93cf97830f1'); return false;" title="Close"></a></div><div class="dya-container result-tab"></div>
<div class="tellafriend-container result-tab email_job_content"></div>
<div class="sign-in-container result-tab"></div>
<div class="notes-container result-tab"></div>
</div>
</div>
<div class="jobToJobRec_Hide" id="jobToJobRec_bc5eb93cf97830f1_sj"></div>
<div class="jobsearch-SerpJobCard unifiedRow row result" data-jk="91bccbc189cce97f" data-tn-component="organicJob" id="p_91bccbc189cce97f">
<h2 class="title">
<a class="jobtitle turnstileLink" data-tn-element="jobTitle" href="/rc/clk?jk=91bccbc189cce97f&fccid=65976a5ca65e4124&vjs=3" id="jl_91bccbc189cce97f" onclick="setRefineByCookie([]); return rclk(this,jobmap[3],true,0);" onmousedown="return rclk(this,jobmap[3],0);" rel="noopener nofollow" target="_blank" title="Intelligence Analyst">
<b>Intelligence</b> <b>Analyst</b></a>
</h2>
<div class="sjcl">
<div>
<span class="company">
Array Information Technology, Inc.</span>
</div>
<div class="recJobLoc" data-rc-loc="Chantilly, VA" id="recJobLoc_91bccbc189cce97f" style="display: none"></div>
<span class="location accessible-contrast-color-location">Chantilly, VA</span>
</div>
<table class="jobCardShelfContainer" role="presentation"><tr class="jobCardShelf"><td class="jobCardShelfItem indeedApply"><span class="jobCardShelfIcon"><svg fill="none" height="16" viewbox="0 0 20 20" width="16"><rect fill="#FF5A1F" height="20" rx="10" width="20"></rect><path clip-rule="evenodd" d="M15.3125 4.0625L10.8125 15.3125L7.99999 11.375L15.3125 4.0625ZM7.604 12.7576L6.875 15.3125L8.567 14.1054L7.604 12.7576ZM7.20463 10.5796L12.419 5.36525L4.0625 9.125L6.9875 10.7968L7.20463 10.5796Z" fill="white" fill-rule="evenodd"></path></svg></span><span class="iaLabel iaIconActive">Easily apply</span></td></tr></table><div class="summary">
<ul style="list-style-type:circle;margin-top: 0px;margin-bottom: 0px;padding-left:20px;">
<li style="margin-bottom:0px;">.Demonstrated ability to create and provide <b>intelligence</b> briefings for all levels of personnel.</li>
<li>This position will support law enforcement investigations,…</li>
</ul></div>
<div class="jobsearch-SerpJobCard-footer">
<div class="jobsearch-SerpJobCard-footerActions">
<div class="result-link-bar-container">
<div class="result-link-bar"><span class="date">30+ days ago</span><span class="tt_set" id="tt_set_3"><div class="job-reaction"><button aria-expanded="false" aria-haspopup="true" aria-label="save or dislike" class="job-reaction-kebab" data-ol-has-click-handler="" onclick="toggleKebabMenu('91bccbc189cce97f', false, event); return false;" tabindex="0"></button><span class="job-reaction-kebab-menu"><button class="job-reaction-kebab-item job-reaction-save" data-ol-has-click-handler="" onclick="changeJobState('91bccbc189cce97f', 'save', 'linkbar', false, '');return false;"><svg focusable="false" height="16" viewbox="0 0 24 24" width="16"><g><path d="M16.5,3A6,6,0,0,0,12,5.09,6,6,0,0,0,7.5,3,5.45,5.45,0,0,0,2,8.5C2,12.28,5.4,15.36,10.55,20L12,21.35,13.45,20C18.6,15.36,22,12.28,22,8.5A5.45,5.45,0,0,0,16.5,3ZM12.1,18.55l-0.1.1-0.1-.1C7.14,14.24,4,11.39,4,8.5A3.42,3.42,0,0,1,7.5,5a3.91,3.91,0,0,1,3.57,2.36h1.87A3.88,3.88,0,0,1,16.5,5,3.42,3.42,0,0,1,20,8.5C20,11.39,16.86,14.24,12.1,18.55Z" fill="#2d2d2d"></path></g></svg><span class="job-reaction-kebab-item-text">Save job</span></button><button class="job-reaction-kebab-item job-reaction-dislike" data-ol-has-click-handler="" onclick="dislikeJob(false, false, '91bccbc189cce97f', 'unsave', 'linkbar', false, '');"><span class="job-reaction-dislike-icon"></span><span class="job-reaction-kebab-item-text">Not interested</span></button><button class="job-reaction-kebab-item job-reaction-report" onclick="reportJob('91bccbc189cce97f');"><span class="job-reaction-report-icon"></span><span class="job-reaction-kebab-item-text">Report Job</span></button></span></div><span class="result-link-bar-separator">·</span><a class="sl resultLink save-job-link" href="#" id="sj_91bccbc189cce97f" onclick="changeJobState('91bccbc189cce97f', 'save', 'linkbar', false, ''); return false;" title="Save this job to my.indeed">Save job</a><span class="result-link-bar-separator">·</span><button aria-expanded="false" class="sl resultLink more-link" id="tog_3" onclick="toggleMoreLinks('91bccbc189cce97f', '3'); return false;">More...</button></span><div class="edit_note_content" id="editsaved2_91bccbc189cce97f" style="display:none;"></div><script>if (!window['result_91bccbc189cce97f']) {window['result_91bccbc189cce97f'] = {};}window['result_91bccbc189cce97f']['showSource'] = false; window['result_91bccbc189cce97f']['source'] = "Array Information Technology, Inc."; window['result_91bccbc189cce97f']['loggedIn'] = false; window['result_91bccbc189cce97f']['showMyJobsLinks'] = false;window['result_91bccbc189cce97f']['undoAction'] = "unsave";window['result_91bccbc189cce97f']['relativeJobAge'] = "30+ days ago";window['result_91bccbc189cce97f']['jobKey'] = "91bccbc189cce97f"; window['result_91bccbc189cce97f']['myIndeedAvailable'] = true; window['result_91bccbc189cce97f']['showMoreActionsLink'] = window['result_91bccbc189cce97f']['showMoreActionsLink'] || true; window['result_91bccbc189cce97f']['resultNumber'] = 3; window['result_91bccbc189cce97f']['jobStateChangedToSaved'] = false; window['result_91bccbc189cce97f']['searchState'] = "q=intelligence analyst&start=2"; window['result_91bccbc189cce97f']['basicPermaLink'] = "https://www.indeed.com"; window['result_91bccbc189cce97f']['saveJobFailed'] = false; window['result_91bccbc189cce97f']['removeJobFailed'] = false; window['result_91bccbc189cce97f']['requestPending'] = false; window['result_91bccbc189cce97f']['notesEnabled'] = true; window['result_91bccbc189cce97f']['currentPage'] = "serp"; window['result_91bccbc189cce97f']['sponsored'] = 
false;window['result_91bccbc189cce97f']['reportJobButtonEnabled'] = false; window['result_91bccbc189cce97f']['showMyJobsHired'] = false; window['result_91bccbc189cce97f']['showSaveForSponsored'] = false; window['result_91bccbc189cce97f']['showJobAge'] = true; window['result_91bccbc189cce97f']['showHolisticCard'] = true; window['result_91bccbc189cce97f']['showDislike'] = true; window['result_91bccbc189cce97f']['showKebab'] = true; window['result_91bccbc189cce97f']['showReport'] = true;</script></div></div>
</div>
</div>
<div class="tab-container">
<div class="more-links-container result-tab" id="tt_display_3" style="display:none;"><div class="more_actions" id="more_3"><ul><li><span class="mat">View all <a href="/jobs?q=Array+Information+Technology,+Inc&l=Chantilly,+VA&nc=jasx">Array Information Technology, Inc. jobs in Chantilly, VA</a> - <a href="/l-Chantilly,-VA-jobs.html">Chantilly jobs</a></span></li></ul></div><a class="close-link closeLink" href="#" onclick="toggleMoreLinks('91bccbc189cce97f'); return false;" title="Close"></a></div><div class="dya-container result-tab"></div>
<div class="tellafriend-container result-tab email_job_content"></div>
<div class="sign-in-container result-tab"></div>
<div class="notes-container result-tab"></div>
</div>
</div>
<div class="jobToJobRec_Hide" id="jobToJobRec_91bccbc189cce97f_sj"></div>
<div class="jobsearch-SerpJobCard unifiedRow row result" data-jk="ec5576e08ccffe63" data-tn-component="organicJob" id="p_ec5576e08ccffe63">
<h2 class="title">
<a class="jobtitle turnstileLink" data-tn-element="jobTitle" href="/rc/clk?jk=ec5576e08ccffe63&fccid=e86212ad9b1d3808&vjs=3" id="jl_ec5576e08ccffe63" onclick="setRefineByCookie([]); return rclk(this,jobmap[4],true,1);" onmousedown="return rclk(this,jobmap[4],1);" rel="noopener nofollow" target="_blank" title="Intelligence Operations Specialist">
<b>Intelligence</b> Operations Specialist</a>
<span class="new">new</span></h2>
<div class="sjcl">
<div>
<span class="company">
<a class="turnstileLink" data-tn-element="companyName" href="/cmp/Transportation-Security-Administration" onmousedown="this.href = appendParamsOnce(this.href, 'from=SERP&campaignid=serp-linkcompanyname&fromjk=ec5576e08ccffe63&jcid=e86212ad9b1d3808')" rel="noopener" target="_blank">
Transportation Security Administration</a></span>
<span class="ratingsDisplay">
<a class="ratingNumber" data-tn-variant="cmplinktst2" href="/cmp/Transportation-Security-Administration/reviews" onmousedown="this.href = appendParamsOnce(this.href, '?campaignid=cmplinktst2&from=SERP&jt=Intelligence+Operations+Specialist&fromjk=ec5576e08ccffe63&jcid=e86212ad9b1d3808');" rel="noopener" target="_blank" title="Transportation Security Administration reviews">
<span class="ratingsContent">
3.3<svg class="starIcon" height="12px" role="img" width="12px">
<g>
<path d="M 12.00,4.34 C 12.00,4.34 7.69,3.97 7.69,3.97 7.69,3.97 6.00,0.00 6.00,0.00 6.00,0.00 4.31,3.98 4.31,3.98 4.31,3.98 0.00,4.34 0.00,4.34 0.00,4.34 3.28,7.18 3.28,7.18 3.28,7.18 2.29,11.40 2.29,11.40 2.29,11.40 6.00,9.16 6.00,9.16 6.00,9.16 9.71,11.40 9.71,11.40 9.71,11.40 8.73,7.18 8.73,7.18 8.73,7.18 12.00,4.34 12.00,4.34 Z" style="fill: #FFB103"></path>
</g>
</svg>
</span>
</a>
</span>
</div>
<div class="recJobLoc" data-rc-loc="Colorado Springs, CO" id="recJobLoc_ec5576e08ccffe63" style="display: none"></div>
<span class="location accessible-contrast-color-location">Colorado Springs, CO</span>
</div>
<div class="salarySnippet holisticSalary">
<span class="salary no-wrap">
<span class="salaryText">
$52,700 - $99,586 a year</span>
</span>
</div>
<div class="summary">
<ul style="list-style-type:circle;margin-top: 0px;margin-bottom: 0px;padding-left:20px;">
<li>Advanced technical knowledge of <b>intelligence</b> collection, analysis, evaluation, interpretation and operations to plan and accomplish <b>intelligence</b> assignments and…</li>
</ul></div>
<div class="jobsearch-SerpJobCard-footer">
<div class="jobsearch-SerpJobCard-footerActions">
<div class="result-link-bar-container">
<div class="result-link-bar"><span class="date">3 days ago</span><span class="tt_set" id="tt_set_4"><div class="job-reaction"><button aria-expanded="false" aria-haspopup="true" aria-label="save or dislike" class="job-reaction-kebab" data-ol-has-click-handler="" onclick="toggleKebabMenu('ec5576e08ccffe63', false, event); return false;" tabindex="0"></button><span class="job-reaction-kebab-menu"><button class="job-reaction-kebab-item job-reaction-save" data-ol-has-click-handler="" onclick="changeJobState('ec5576e08ccffe63', 'save', 'linkbar', false, '');return false;"><svg focusable="false" height="16" viewbox="0 0 24 24" width="16"><g><path d="M16.5,3A6,6,0,0,0,12,5.09,6,6,0,0,0,7.5,3,5.45,5.45,0,0,0,2,8.5C2,12.28,5.4,15.36,10.55,20L12,21.35,13.45,20C18.6,15.36,22,12.28,22,8.5A5.45,5.45,0,0,0,16.5,3ZM12.1,18.55l-0.1.1-0.1-.1C7.14,14.24,4,11.39,4,8.5A3.42,3.42,0,0,1,7.5,5a3.91,3.91,0,0,1,3.57,2.36h1.87A3.88,3.88,0,0,1,16.5,5,3.42,3.42,0,0,1,20,8.5C20,11.39,16.86,14.24,12.1,18.55Z" fill="#2d2d2d"></path></g></svg><span class="job-reaction-kebab-item-text">Save job</span></button><button class="job-reaction-kebab-item job-reaction-dislike" data-ol-has-click-handler="" onclick="dislikeJob(false, false, 'ec5576e08ccffe63', 'unsave', 'linkbar', false, '');"><span class="job-reaction-dislike-icon"></span><span class="job-reaction-kebab-item-text">Not interested</span></button><button class="job-reaction-kebab-item job-reaction-report" onclick="reportJob('ec5576e08ccffe63');"><span class="job-reaction-report-icon"></span><span class="job-reaction-kebab-item-text">Report Job</span></button></span></div><span class="result-link-bar-separator">·</span><a class="sl resultLink save-job-link" href="#" id="sj_ec5576e08ccffe63" onclick="changeJobState('ec5576e08ccffe63', 'save', 'linkbar', false, ''); return false;" title="Save this job to my.indeed">Save job</a><span class="result-link-bar-separator">·</span><button aria-expanded="false" class="sl resultLink more-link" id="tog_4" onclick="toggleMoreLinks('ec5576e08ccffe63', '4'); return false;">More...</button></span><div class="edit_note_content" id="editsaved2_ec5576e08ccffe63" style="display:none;"></div><script>if (!window['result_ec5576e08ccffe63']) {window['result_ec5576e08ccffe63'] = {};}window['result_ec5576e08ccffe63']['showSource'] = false; window['result_ec5576e08ccffe63']['source'] = "Transportation Security Administration"; window['result_ec5576e08ccffe63']['loggedIn'] = false; window['result_ec5576e08ccffe63']['showMyJobsLinks'] = false;window['result_ec5576e08ccffe63']['undoAction'] = "unsave";window['result_ec5576e08ccffe63']['relativeJobAge'] = "3 days ago";window['result_ec5576e08ccffe63']['jobKey'] = "ec5576e08ccffe63"; window['result_ec5576e08ccffe63']['myIndeedAvailable'] = true; window['result_ec5576e08ccffe63']['showMoreActionsLink'] = window['result_ec5576e08ccffe63']['showMoreActionsLink'] || true; window['result_ec5576e08ccffe63']['resultNumber'] = 4; window['result_ec5576e08ccffe63']['jobStateChangedToSaved'] = false; window['result_ec5576e08ccffe63']['searchState'] = "q=intelligence analyst&start=2"; window['result_ec5576e08ccffe63']['basicPermaLink'] = "https://www.indeed.com"; window['result_ec5576e08ccffe63']['saveJobFailed'] = false; window['result_ec5576e08ccffe63']['removeJobFailed'] = false; window['result_ec5576e08ccffe63']['requestPending'] = false; window['result_ec5576e08ccffe63']['notesEnabled'] = true; window['result_ec5576e08ccffe63']['currentPage'] = "serp"; window['result_ec5576e08ccffe63']['sponsored'] = 
false;window['result_ec5576e08ccffe63']['reportJobButtonEnabled'] = false; window['result_ec5576e08ccffe63']['showMyJobsHired'] = false; window['result_ec5576e08ccffe63']['showSaveForSponsored'] = false; window['result_ec5576e08ccffe63']['showJobAge'] = true; window['result_ec5576e08ccffe63']['showHolisticCard'] = true; window['result_ec5576e08ccffe63']['showDislike'] = true; window['result_ec5576e08ccffe63']['showKebab'] = true; window['result_ec5576e08ccffe63']['showReport'] = true;</script></div></div>
</div>
</div>
<div class="tab-container">
<div class="more-links-container result-tab" id="tt_display_4" style="display:none;"><div class="more_actions" id="more_4"><ul><li><span class="mat">View all <a href="/q-Transportation-Security-Administration-l-Colorado-Springs,-CO-jobs.html">Transportation Security Administration jobs in Colorado Springs, CO</a> - <a href="/l-Colorado-Springs,-CO-jobs.html">Colorado Springs jobs</a></span></li><li><span class="mat">Salary Search: <a href="/salaries/intelligence-specialist-Salaries,-Colorado-Springs-CO" onmousedown="this.href = appendParamsOnce(this.href, '?campaignid=serp-more&fromjk=ec5576e08ccffe63&from=serp-more');">Intelligence Specialist salaries in Colorado Springs, CO</a></span></li><li><span class="mat">Learn more about working at <a href="/cmp/Transportation-Security-Administration/about" onmousedown="this.href = appendParamsOnce(this.href, '?fromjk=ec5576e08ccffe63&from=serp-more&campaignid=serp-more&jcid=e86212ad9b1d3808');">Transportation Security Administration</a></span></li><li><span class="mat">See popular <a href="/cmp/Transportation-Security-Administration/faq" onmousedown="this.href = appendParamsOnce(this.href, '?from=serp-more&campaignid=serp-more&fromjk=ec5576e08ccffe63&jcid=e86212ad9b1d3808');">questions & answers about Transportation Security Administration</a></span></li></ul></div><a class="close-link closeLink" href="#" onclick="toggleMoreLinks('ec5576e08ccffe63'); return false;" title="Close"></a></div><div class="dya-container result-tab"></div>
<div class="tellafriend-container result-tab email_job_content"></div>
<div class="sign-in-container result-tab"></div>
<div class="notes-container result-tab"></div>
</div>
</div>
<div class="jobToJobRec_Hide" id="jobToJobRec_ec5576e08ccffe63_sj"></div>
<div class="jobsearch-SerpJobCard unifiedRow row result" data-jk="7503f6cddae7cbd3" data-tn-component="organicJob" id="p_7503f6cddae7cbd3">
<h2 class="title">
<a class="jobtitle turnstileLink" data-tn-element="jobTitle" href="/rc/clk?jk=7503f6cddae7cbd3&fccid=64e4cdd7435d8c42&vjs=3" id="jl_7503f6cddae7cbd3" onclick="setRefineByCookie([]); return rclk(this,jobmap[5],true,0);" onmousedown="return rclk(this,jobmap[5],0);" rel="noopener nofollow" target="_blank" title="Intelligence Analyst (Remote)">
<b>Intelligence</b> <b>Analyst</b> (Remote)</a>
</h2>
<div class="sjcl">
<div>
<span class="company">
<a class="turnstileLink" data-tn-element="companyName" href="/cmp/Crowdstrike" onmousedown="this.href = appendParamsOnce(this.href, 'from=SERP&campaignid=serp-linkcompanyname&fromjk=7503f6cddae7cbd3&jcid=bf94d2bbe4f483e0')" rel="noopener" target="_blank">
CrowdStrike</a></span>
<span class="ratingsDisplay">
<a class="ratingNumber" data-tn-variant="cmplinktst2" href="/cmp/Crowdstrike/reviews" onmousedown="this.href = appendParamsOnce(this.href, '?campaignid=cmplinktst2&from=SERP&jt=Intelligence+Analyst+%28Remote%29&fromjk=7503f6cddae7cbd3&jcid=bf94d2bbe4f483e0');" rel="noopener" target="_blank" title="Crowdstrike reviews">
<span class="ratingsContent">
2.8<svg class="starIcon" height="12px" role="img" width="12px">
<g>
<path d="M 12.00,4.34 C 12.00,4.34 7.69,3.97 7.69,3.97 7.69,3.97 6.00,0.00 6.00,0.00 6.00,0.00 4.31,3.98 4.31,3.98 4.31,3.98 0.00,4.34 0.00,4.34 0.00,4.34 3.28,7.18 3.28,7.18 3.28,7.18 2.29,11.40 2.29,11.40 2.29,11.40 6.00,9.16 6.00,9.16 6.00,9.16 9.71,11.40 9.71,11.40 9.71,11.40 8.73,7.18 8.73,7.18 8.73,7.18 12.00,4.34 12.00,4.34 Z" style="fill: #FFB103"></path>
</g>
</svg>
</span>
</a>
</span>
</div>
<div class="recJobLoc" data-rc-loc="United States" id="recJobLoc_7503f6cddae7cbd3" style="display: none"></div>
<span class="location accessible-contrast-color-location">United States</span>
<span class="remote-bullet">•</span>
<span class="remote">Remote</span>
</div>
<div class="summary">
<ul style="list-style-type:circle;margin-top: 0px;margin-bottom: 0px;padding-left:20px;">
<li>Undergraduate degree, military training or relevant experience in cyber <b>intelligence</b>, computer science, general <b>intelligence</b> studies, security studies,…</li>
</ul></div>
<div class="jobsearch-SerpJobCard-footer">
<div class="jobsearch-SerpJobCard-footerActions">
<div class="result-link-bar-container">
<div class="result-link-bar"><span class="date">8 days ago</span><span class="tt_set" id="tt_set_5"><div class="job-reaction"><button aria-expanded="false" aria-haspopup="true" aria-label="save or dislike" class="job-reaction-kebab" data-ol-has-click-handler="" onclick="toggleKebabMenu('7503f6cddae7cbd3', false, event); return false;" tabindex="0"></button><span class="job-reaction-kebab-menu"><button class="job-reaction-kebab-item job-reaction-save" data-ol-has-click-handler="" onclick="changeJobState('7503f6cddae7cbd3', 'save', 'linkbar', false, '');return false;"><svg focusable="false" height="16" viewbox="0 0 24 24" width="16"><g><path d="M16.5,3A6,6,0,0,0,12,5.09,6,6,0,0,0,7.5,3,5.45,5.45,0,0,0,2,8.5C2,12.28,5.4,15.36,10.55,20L12,21.35,13.45,20C18.6,15.36,22,12.28,22,8.5A5.45,5.45,0,0,0,16.5,3ZM12.1,18.55l-0.1.1-0.1-.1C7.14,14.24,4,11.39,4,8.5A3.42,3.42,0,0,1,7.5,5a3.91,3.91,0,0,1,3.57,2.36h1.87A3.88,3.88,0,0,1,16.5,5,3.42,3.42,0,0,1,20,8.5C20,11.39,16.86,14.24,12.1,18.55Z" fill="#2d2d2d"></path></g></svg><span class="job-reaction-kebab-item-text">Save job</span></button><button class="job-reaction-kebab-item job-reaction-dislike" data-ol-has-click-handler="" onclick="dislikeJob(false, false, '7503f6cddae7cbd3', 'unsave', 'linkbar', false, '');"><span class="job-reaction-dislike-icon"></span><span class="job-reaction-kebab-item-text">Not interested</span></button><button class="job-reaction-kebab-item job-reaction-report" onclick="reportJob('7503f6cddae7cbd3');"><span class="job-reaction-report-icon"></span><span class="job-reaction-kebab-item-text">Report Job</span></button></span></div><span class="result-link-bar-separator">·</span><a class="sl resultLink save-job-link" href="#" id="sj_7503f6cddae7cbd3" onclick="changeJobState('7503f6cddae7cbd3', 'save', 'linkbar', false, ''); return false;" title="Save this job to my.indeed">Save job</a><span class="result-link-bar-separator">·</span><button aria-expanded="false" class="sl resultLink more-link" id="tog_5" onclick="toggleMoreLinks('7503f6cddae7cbd3', '5'); return false;">More...</button></span><div class="edit_note_content" id="editsaved2_7503f6cddae7cbd3" style="display:none;"></div><script>if (!window['result_7503f6cddae7cbd3']) {window['result_7503f6cddae7cbd3'] = {};}window['result_7503f6cddae7cbd3']['showSource'] = false; window['result_7503f6cddae7cbd3']['source'] = "CrowdStrike"; window['result_7503f6cddae7cbd3']['loggedIn'] = false; window['result_7503f6cddae7cbd3']['showMyJobsLinks'] = false;window['result_7503f6cddae7cbd3']['undoAction'] = "unsave";window['result_7503f6cddae7cbd3']['relativeJobAge'] = "8 days ago";window['result_7503f6cddae7cbd3']['jobKey'] = "7503f6cddae7cbd3"; window['result_7503f6cddae7cbd3']['myIndeedAvailable'] = true; window['result_7503f6cddae7cbd3']['showMoreActionsLink'] = window['result_7503f6cddae7cbd3']['showMoreActionsLink'] || true; window['result_7503f6cddae7cbd3']['resultNumber'] = 5; window['result_7503f6cddae7cbd3']['jobStateChangedToSaved'] = false; window['result_7503f6cddae7cbd3']['searchState'] = "q=intelligence analyst&start=2"; window['result_7503f6cddae7cbd3']['basicPermaLink'] = "https://www.indeed.com"; window['result_7503f6cddae7cbd3']['saveJobFailed'] = false; window['result_7503f6cddae7cbd3']['removeJobFailed'] = false; window['result_7503f6cddae7cbd3']['requestPending'] = false; window['result_7503f6cddae7cbd3']['notesEnabled'] = true; window['result_7503f6cddae7cbd3']['currentPage'] = "serp"; window['result_7503f6cddae7cbd3']['sponsored'] = 
false;window['result_7503f6cddae7cbd3']['reportJobButtonEnabled'] = false; window['result_7503f6cddae7cbd3']['showMyJobsHired'] = false; window['result_7503f6cddae7cbd3']['showSaveForSponsored'] = false; window['result_7503f6cddae7cbd3']['showJobAge'] = true; window['result_7503f6cddae7cbd3']['showHolisticCard'] = true; window['result_7503f6cddae7cbd3']['showDislike'] = true; window['result_7503f6cddae7cbd3']['showKebab'] = true; window['result_7503f6cddae7cbd3']['showReport'] = true;</script></div></div>
</div>
</div>
<div class="tab-container">
<div class="more-links-container result-tab" id="tt_display_5" style="display:none;"><div class="more_actions" id="more_5"><ul><li><span class="mat">View all <a href="/q-Crowdstrike-l-United-States-jobs.html">CrowdStrike jobs in United States</a> - <a href="/l-United-States-jobs.html">United States jobs</a></span></li><li><span class="mat">Salary Search: <a href="/salaries/intelligence-analyst-Salaries,-US" onmousedown="this.href = appendParamsOnce(this.href, '?campaignid=serp-more&fromjk=7503f6cddae7cbd3&from=serp-more');">Intelligence Analyst salaries in United States</a></span></li><li><span class="mat">Learn more about working at <a href="/cmp/Crowdstrike/about" onmousedown="this.href = appendParamsOnce(this.href, '?fromjk=7503f6cddae7cbd3&from=serp-more&campaignid=serp-more&jcid=bf94d2bbe4f483e0');">CrowdStrike</a></span></li><li><span class="mat">See popular <a href="/cmp/Crowdstrike/faq" onmousedown="this.href = appendParamsOnce(this.href, '?from=serp-more&campaignid=serp-more&fromjk=7503f6cddae7cbd3&jcid=bf94d2bbe4f483e0');">questions & answers about CrowdStrike</a></span></li></ul></div><a class="close-link closeLink" href="#" onclick="toggleMoreLinks('7503f6cddae7cbd3'); return false;" title="Close"></a></div><div class="dya-container result-tab"></div>
<div class="tellafriend-container result-tab email_job_content"></div>
<div class="sign-in-container result-tab"></div>
<div class="notes-container result-tab"></div>
</div>
</div>
<div class="jobToJobRec_Hide" id="jobToJobRec_7503f6cddae7cbd3_sj"></div>
<div class="jobsearch-SerpJobCard unifiedRow row result" data-jk="517f132220bfa6eb" data-tn-component="organicJob" id="p_517f132220bfa6eb">
<h2 class="title">
<a class="jobtitle turnstileLink" data-tn-element="jobTitle" href="/rc/clk?jk=517f132220bfa6eb&fccid=ca509b164585637a&vjs=3" id="jl_517f132220bfa6eb" onclick="setRefineByCookie([]); return rclk(this,jobmap[6],true,1);" onmousedown="return rclk(this,jobmap[6],1);" rel="noopener nofollow" target="_blank" title="INTELLIGENCE OPERATIONS SPECIALIST">
<b>INTELLIGENCE</b> OPERATIONS SPECIALIST</a>
<span class="new">new</span></h2>
<div class="sjcl">
<div>
<span class="company">
<a class="turnstileLink" data-tn-element="companyName" href="/cmp/United-States-Department-of-Defense" onmousedown="this.href = appendParamsOnce(this.href, 'from=SERP&campaignid=serp-linkcompanyname&fromjk=517f132220bfa6eb&jcid=40701b7564b5f676')" rel="noopener" target="_blank">
US Department of Defense</a></span>
<span class="ratingsDisplay">
<a class="ratingNumber" data-tn-variant="cmplinktst2" href="/cmp/United-States-Department-of-Defense/reviews" onmousedown="this.href = appendParamsOnce(this.href, '?campaignid=cmplinktst2&from=SERP&jt=INTELLIGENCE+OPERATIONS+SPECIALIST&fromjk=517f132220bfa6eb&jcid=40701b7564b5f676');" rel="noopener" target="_blank" title="US Department of Defense reviews">
<span class="ratingsContent">
4.2<svg class="starIcon" height="12px" role="img" width="12px">
<g>
<path d="M 12.00,4.34 C 12.00,4.34 7.69,3.97 7.69,3.97 7.69,3.97 6.00,0.00 6.00,0.00 6.00,0.00 4.31,3.98 4.31,3.98 4.31,3.98 0.00,4.34 0.00,4.34 0.00,4.34 3.28,7.18 3.28,7.18 3.28,7.18 2.29,11.40 2.29,11.40 2.29,11.40 6.00,9.16 6.00,9.16 6.00,9.16 9.71,11.40 9.71,11.40 9.71,11.40 8.73,7.18 8.73,7.18 8.73,7.18 12.00,4.34 12.00,4.34 Z" style="fill: #FFB103"></path>
</g>
</svg>
</span>
</a>
</span>
</div>
<div class="recJobLoc" data-rc-loc="Andover, MA" id="recJobLoc_517f132220bfa6eb" style="display: none"></div>
<span class="location accessible-contrast-color-location">Andover, MA</span>
<span>
<a class="more_loc" href="/addlLoc/redirect?tk=1ekp4ct8j0g4b000&jk=517f132220bfa6eb&dest=%2Fjobs%3Fq%3Dintelligence%2Banalyst%26rbt%3DINTELLIGENCE%2BOPERATIONS%2BSPECIALIST%26rbc%3DUS%2BDepartment%2Bof%2BDefense%26jtid%3Df2e507bb9313d71c%26jcid%3D40701b7564b5f676%26grp%3Dtcl" onmousedown="ptk('addlloc');" rel="nofollow">
+2 locations</a>
</span>
<span class="remote-bullet">•</span>
<span class="remote">Remote</span>
</div>
<div class="salarySnippet holisticSalary">
<span class="salary no-wrap">
<span class="salaryText">
$101,585 - $132,064 a year</span>
</span>
</div>
<div class="summary">
<ul style="list-style-type:circle;margin-top: 0px;margin-bottom: 0px;padding-left:20px;">
<li style="margin-bottom:0px;">Substitution of education may not be used in lieu of specialized experience for this grade level.</li>
<li>DD214 showing character of service, SF-15 Form and VA letter…</li>
</ul></div>
<div class="jobsearch-SerpJobCard-footer">
<div class="jobsearch-SerpJobCard-footerActions">
<div class="result-link-bar-container">
<div class="result-link-bar"><span class="date">7 days ago</span><span class="tt_set" id="tt_set_6"><div class="job-reaction"><button aria-expanded="false" aria-haspopup="true" aria-label="save or dislike" class="job-reaction-kebab" data-ol-has-click-handler="" onclick="toggleKebabMenu('517f132220bfa6eb', false, event); return false;" tabindex="0"></button><span class="job-reaction-kebab-menu"><button class="job-reaction-kebab-item job-reaction-save" data-ol-has-click-handler="" onclick="changeJobState('517f132220bfa6eb', 'save', 'linkbar', false, '');return false;"><svg focusable="false" height="16" viewbox="0 0 24 24" width="16"><g><path d="M16.5,3A6,6,0,0,0,12,5.09,6,6,0,0,0,7.5,3,5.45,5.45,0,0,0,2,8.5C2,12.28,5.4,15.36,10.55,20L12,21.35,13.45,20C18.6,15.36,22,12.28,22,8.5A5.45,5.45,0,0,0,16.5,3ZM12.1,18.55l-0.1.1-0.1-.1C7.14,14.24,4,11.39,4,8.5A3.42,3.42,0,0,1,7.5,5a3.91,3.91,0,0,1,3.57,2.36h1.87A3.88,3.88,0,0,1,16.5,5,3.42,3.42,0,0,1,20,8.5C20,11.39,16.86,14.24,12.1,18.55Z" fill="#2d2d2d"></path></g></svg><span class="job-reaction-kebab-item-text">Save job</span></button><button class="job-reaction-kebab-item job-reaction-dislike" data-ol-has-click-handler="" onclick="dislikeJob(false, false, '517f132220bfa6eb', 'unsave', 'linkbar', false, '');"><span class="job-reaction-dislike-icon"></span><span class="job-reaction-kebab-item-text">Not interested</span></button><button class="job-reaction-kebab-item job-reaction-report" onclick="reportJob('517f132220bfa6eb');"><span class="job-reaction-report-icon"></span><span class="job-reaction-kebab-item-text">Report Job</span></button></span></div><span class="result-link-bar-separator">·</span><a class="sl resultLink save-job-link" href="#" id="sj_517f132220bfa6eb" onclick="changeJobState('517f132220bfa6eb', 'save', 'linkbar', false, ''); return false;" title="Save this job to my.indeed">Save job</a><span class="result-link-bar-separator">·</span><button aria-expanded="false" class="sl resultLink more-link" id="tog_6" onclick="toggleMoreLinks('517f132220bfa6eb', '6'); return false;">More...</button></span><div class="edit_note_content" id="editsaved2_517f132220bfa6eb" style="display:none;"></div><script>if (!window['result_517f132220bfa6eb']) {window['result_517f132220bfa6eb'] = {};}window['result_517f132220bfa6eb']['showSource'] = false; window['result_517f132220bfa6eb']['source'] = "usajobs.gov"; window['result_517f132220bfa6eb']['loggedIn'] = false; window['result_517f132220bfa6eb']['showMyJobsLinks'] = false;window['result_517f132220bfa6eb']['undoAction'] = "unsave";window['result_517f132220bfa6eb']['relativeJobAge'] = "7 days ago";window['result_517f132220bfa6eb']['jobKey'] = "517f132220bfa6eb"; window['result_517f132220bfa6eb']['myIndeedAvailable'] = true; window['result_517f132220bfa6eb']['showMoreActionsLink'] = window['result_517f132220bfa6eb']['showMoreActionsLink'] || true; window['result_517f132220bfa6eb']['resultNumber'] = 6; window['result_517f132220bfa6eb']['jobStateChangedToSaved'] = false; window['result_517f132220bfa6eb']['searchState'] = "q=intelligence analyst&start=2"; window['result_517f132220bfa6eb']['basicPermaLink'] = "https://www.indeed.com"; window['result_517f132220bfa6eb']['saveJobFailed'] = false; window['result_517f132220bfa6eb']['removeJobFailed'] = false; window['result_517f132220bfa6eb']['requestPending'] = false; window['result_517f132220bfa6eb']['notesEnabled'] = true; window['result_517f132220bfa6eb']['currentPage'] = "serp"; window['result_517f132220bfa6eb']['sponsored'] = 
false;window['result_517f132220bfa6eb']['reportJobButtonEnabled'] = false; window['result_517f132220bfa6eb']['showMyJobsHired'] = false; window['result_517f132220bfa6eb']['showSaveForSponsored'] = false; window['result_517f132220bfa6eb']['showJobAge'] = true; window['result_517f132220bfa6eb']['showHolisticCard'] = true; window['result_517f132220bfa6eb']['showDislike'] = true; window['result_517f132220bfa6eb']['showKebab'] = true; window['result_517f132220bfa6eb']['showReport'] = true;</script></div></div>
</div>
</div>
<div class="tab-container">
<div class="more-links-container result-tab" id="tt_display_6" style="display:none;"><div class="more_actions" id="more_6"><ul><li><span class="mat">View all <a href="/q-US-Department-of-Defense-l-Andover,-MA-jobs.html">US Department of Defense jobs in Andover, MA</a> - <a href="/l-Andover,-MA-jobs.html">Andover jobs</a></span></li><li><span class="mat">Learn more about working at <a href="/cmp/United-States-Department-of-Defense/about" onmousedown="this.href = appendParamsOnce(this.href, '?fromjk=517f132220bfa6eb&from=serp-more&campaignid=serp-more&jcid=40701b7564b5f676');">US Department of Defense</a></span></li><li><span class="mat">See popular <a href="/cmp/United-States-Department-of-Defense/faq" onmousedown="this.href = appendParamsOnce(this.href, '?from=serp-more&campaignid=serp-more&fromjk=517f132220bfa6eb&jcid=40701b7564b5f676');">questions & answers about US Department of Defense</a></span></li></ul></div><a class="close-link closeLink" href="#" onclick="toggleMoreLinks('517f132220bfa6eb'); return false;" title="Close"></a></div><div class="dya-container result-tab"></div>
<div class="tellafriend-container result-tab email_job_content"></div>
<div class="sign-in-container result-tab"></div>
<div class="notes-container result-tab"></div>
</div>
</div>
<div class="jobToJobRec_Hide" id="jobToJobRec_517f132220bfa6eb_sj"></div>
<div class="jobsearch-SerpJobCard unifiedRow row result" data-jk="7027ddae82aea145" data-tn-component="organicJob" id="p_7027ddae82aea145">
<h2 class="title">
<a class="jobtitle turnstileLink" data-tn-element="jobTitle" href="/rc/clk?jk=7027ddae82aea145&fccid=f1b5b95bc792ac3a&vjs=3" id="jl_7027ddae82aea145" onclick="setRefineByCookie([]); return rclk(this,jobmap[7],true,0);" onmousedown="return rclk(this,jobmap[7],0);" rel="noopener nofollow" target="_blank" title="Intelligence Analyst">
<b>Intelligence</b> <b>Analyst</b></a>
<span class="new">new</span></h2>
<div class="sjcl">
<div>
<span class="company">
<a class="turnstileLink" data-tn-element="companyName" href="/cmp/Halfaker-and-Associates" onmousedown="this.href = appendParamsOnce(this.href, 'from=SERP&campaignid=serp-linkcompanyname&fromjk=7027ddae82aea145&jcid=f1b5b95bc792ac3a')" rel="noopener" target="_blank">
Halfaker and Associates</a></span>
<span class="ratingsDisplay">
<a class="ratingNumber" data-tn-variant="cmplinktst2" href="/cmp/Halfaker-and-Associates/reviews" onmousedown="this.href = appendParamsOnce(this.href, '?campaignid=cmplinktst2&from=SERP&jt=Intelligence+Analyst&fromjk=7027ddae82aea145&jcid=f1b5b95bc792ac3a');" rel="noopener" target="_blank" title="Halfaker and Associates reviews">
<span class="ratingsContent">
3.8<svg class="starIcon" height="12px" role="img" width="12px">
<g>
<path d="M 12.00,4.34 C 12.00,4.34 7.69,3.97 7.69,3.97 7.69,3.97 6.00,0.00 6.00,0.00 6.00,0.00 4.31,3.98 4.31,3.98 4.31,3.98 0.00,4.34 0.00,4.34 0.00,4.34 3.28,7.18 3.28,7.18 3.28,7.18 2.29,11.40 2.29,11.40 2.29,11.40 6.00,9.16 6.00,9.16 6.00,9.16 9.71,11.40 9.71,11.40 9.71,11.40 8.73,7.18 8.73,7.18 8.73,7.18 12.00,4.34 12.00,4.34 Z" style="fill: #FFB103"></path>
</g>
</svg>
</span>
</a>
</span>
</div>
<div class="recJobLoc" data-rc-loc="Washington, DC" id="recJobLoc_7027ddae82aea145" style="display: none"></div>
<span class="location accessible-contrast-color-location">Washington, DC</span>
</div>
<div class="summary">
<ul style="list-style-type:circle;margin-top: 0px;margin-bottom: 0px;padding-left:20px;">
<li>Must also demonstrate the ability to conduct research, analysis, and technical writing skills and be able to perform triage on questions, issues, or events…</li>
</ul></div>
<div class="jobsearch-SerpJobCard-footer">
<div class="jobsearch-SerpJobCard-footerActions">
<div class="result-link-bar-container">
<div class="result-link-bar"><span class="date">2 days ago</span><span class="tt_set" id="tt_set_7"><div class="job-reaction"><button aria-expanded="false" aria-haspopup="true" aria-label="save or dislike" class="job-reaction-kebab" data-ol-has-click-handler="" onclick="toggleKebabMenu('7027ddae82aea145', false, event); return false;" tabindex="0"></button><span class="job-reaction-kebab-menu"><button class="job-reaction-kebab-item job-reaction-save" data-ol-has-click-handler="" onclick="changeJobState('7027ddae82aea145', 'save', 'linkbar', false, '');return false;"><svg focusable="false" height="16" viewbox="0 0 24 24" width="16"><g><path d="M16.5,3A6,6,0,0,0,12,5.09,6,6,0,0,0,7.5,3,5.45,5.45,0,0,0,2,8.5C2,12.28,5.4,15.36,10.55,20L12,21.35,13.45,20C18.6,15.36,22,12.28,22,8.5A5.45,5.45,0,0,0,16.5,3ZM12.1,18.55l-0.1.1-0.1-.1C7.14,14.24,4,11.39,4,8.5A3.42,3.42,0,0,1,7.5,5a3.91,3.91,0,0,1,3.57,2.36h1.87A3.88,3.88,0,0,1,16.5,5,3.42,3.42,0,0,1,20,8.5C20,11.39,16.86,14.24,12.1,18.55Z" fill="#2d2d2d"></path></g></svg><span class="job-reaction-kebab-item-text">Save job</span></button><button class="job-reaction-kebab-item job-reaction-dislike" data-ol-has-click-handler="" onclick="dislikeJob(false, false, '7027ddae82aea145', 'unsave', 'linkbar', false, '');"><span class="job-reaction-dislike-icon"></span><span class="job-reaction-kebab-item-text">Not interested</span></button><button class="job-reaction-kebab-item job-reaction-report" onclick="reportJob('7027ddae82aea145');"><span class="job-reaction-report-icon"></span><span class="job-reaction-kebab-item-text">Report Job</span></button></span></div><span class="result-link-bar-separator">·</span><a class="sl resultLink save-job-link" href="#" id="sj_7027ddae82aea145" onclick="changeJobState('7027ddae82aea145', 'save', 'linkbar', false, ''); return false;" title="Save this job to my.indeed">Save job</a><span class="result-link-bar-separator">·</span><button aria-expanded="false" class="sl resultLink more-link" id="tog_7" onclick="toggleMoreLinks('7027ddae82aea145', '7'); return false;">More...</button></span><div class="edit_note_content" id="editsaved2_7027ddae82aea145" style="display:none;"></div><script>if (!window['result_7027ddae82aea145']) {window['result_7027ddae82aea145'] = {};}window['result_7027ddae82aea145']['showSource'] = false; window['result_7027ddae82aea145']['source'] = "Halfaker and Associates"; window['result_7027ddae82aea145']['loggedIn'] = false; window['result_7027ddae82aea145']['showMyJobsLinks'] = false;window['result_7027ddae82aea145']['undoAction'] = "unsave";window['result_7027ddae82aea145']['relativeJobAge'] = "2 days ago";window['result_7027ddae82aea145']['jobKey'] = "7027ddae82aea145"; window['result_7027ddae82aea145']['myIndeedAvailable'] = true; window['result_7027ddae82aea145']['showMoreActionsLink'] = window['result_7027ddae82aea145']['showMoreActionsLink'] || true; window['result_7027ddae82aea145']['resultNumber'] = 7; window['result_7027ddae82aea145']['jobStateChangedToSaved'] = false; window['result_7027ddae82aea145']['searchState'] = "q=intelligence analyst&start=2"; window['result_7027ddae82aea145']['basicPermaLink'] = "https://www.indeed.com"; window['result_7027ddae82aea145']['saveJobFailed'] = false; window['result_7027ddae82aea145']['removeJobFailed'] = false; window['result_7027ddae82aea145']['requestPending'] = false; window['result_7027ddae82aea145']['notesEnabled'] = true; window['result_7027ddae82aea145']['currentPage'] = "serp"; window['result_7027ddae82aea145']['sponsored'] = 
false;window['result_7027ddae82aea145']['reportJobButtonEnabled'] = false; window['result_7027ddae82aea145']['showMyJobsHired'] = false; window['result_7027ddae82aea145']['showSaveForSponsored'] = false; window['result_7027ddae82aea145']['showJobAge'] = true; window['result_7027ddae82aea145']['showHolisticCard'] = true; window['result_7027ddae82aea145']['showDislike'] = true; window['result_7027ddae82aea145']['showKebab'] = true; window['result_7027ddae82aea145']['showReport'] = true;</script></div></div>
</div>
</div>
<div class="tab-container">
<div class="more-links-container result-tab" id="tt_display_7" style="display:none;"><div class="more_actions" id="more_7"><ul><li><span class="mat">View all <a href="/q-Halfaker-Associates-l-Washington,-DC-jobs.html">Halfaker and Associates jobs in Washington, DC</a> - <a href="/l-Washington,-DC-jobs.html">Washington jobs</a></span></li><li><span class="mat">Salary Search: <a href="/salaries/intelligence-analyst-Salaries,-Washington-DC" onmousedown="this.href = appendParamsOnce(this.href, '?campaignid=serp-more&fromjk=7027ddae82aea145&from=serp-more');">Intelligence Analyst salaries in Washington, DC</a></span></li><li><span class="mat">Learn more about working at <a href="/cmp/Halfaker-and-Associates" onmousedown="this.href = appendParamsOnce(this.href, '?fromjk=7027ddae82aea145&from=serp-more&campaignid=serp-more&jcid=f1b5b95bc792ac3a');">Halfaker and Associates</a></span></li><li><span class="mat">See popular <a href="/cmp/Halfaker-and-Associates/faq" onmousedown="this.href = appendParamsOnce(this.href, '?from=serp-more&campaignid=serp-more&fromjk=7027ddae82aea145&jcid=f1b5b95bc792ac3a');">questions & answers about Halfaker and Associates</a></span></li></ul></div><a class="close-link closeLink" href="#" onclick="toggleMoreLinks('7027ddae82aea145'); return false;" title="Close"></a></div><div class="dya-container result-tab"></div>
<div class="tellafriend-container result-tab email_job_content"></div>
<div class="sign-in-container result-tab"></div>
<div class="notes-container result-tab"></div>
</div>
</div>
<div class="jobToJobRec_Hide" id="jobToJobRec_7027ddae82aea145_sj"></div>
<div class="jobsearch-SerpJobCard unifiedRow row result" data-jk="3a260a2872349bb7" data-tn-component="organicJob" id="p_3a260a2872349bb7">
<h2 class="title">
<a class="jobtitle turnstileLink" data-tn-element="jobTitle" href="/company/Data-Driven/jobs/Looker-Data-Analyst-3a260a2872349bb7?fccid=6b66804616eb6c58&vjs=3" id="jl_3a260a2872349bb7" onclick="setRefineByCookie([]); return rclk(this,jobmap[8],true,1);" onmousedown="return rclk(this,jobmap[8],1);" rel="noopener nofollow" target="_blank" title="Looker Data Analyst (Fully Remote)">
Looker Data <b>Analyst</b> (Fully Remote)</a>
<span class="new">new</span></h2>
<div class="sjcl">
<div>
<span class="company">
Data Driven</span>
</div>
<div class="recJobLoc" data-rc-loc="United States" id="recJobLoc_3a260a2872349bb7" style="display: none"></div>
<span class="location accessible-contrast-color-location">United States</span>
</div>
<div class="salarySnippet holisticSalary">
<span class="salary no-wrap">
<span class="salaryText">
$55,000 - $70,000 a year</span>
</span>
</div>
<table class="jobCardShelfContainer" role="presentation"><tr class="jobCardShelf"><td class="jobCardShelfItem indeedApply"><span class="jobCardShelfIcon"><svg fill="none" height="16" viewbox="0 0 20 20" width="16"><rect fill="#FF5A1F" height="20" rx="10" width="20"></rect><path clip-rule="evenodd" d="M15.3125 4.0625L10.8125 15.3125L7.99999 11.375L15.3125 4.0625ZM7.604 12.7576L6.875 15.3125L8.567 14.1054L7.604 12.7576ZM7.20463 10.5796L12.419 5.36525L4.0625 9.125L6.9875 10.7968L7.20463 10.5796Z" fill="white" fill-rule="evenodd"></path></svg></span><span class="iaLabel iaIconActive">Easily apply</span></td></tr></table><div class="summary">
<ul style="list-style-type:circle;margin-top: 0px;margin-bottom: 0px;padding-left:20px;">
<li style="margin-bottom:0px;">You have prior experience with a business <b>intelligence</b> tool, and it’s a plus if you have prior experience with Looker.</li>
<li>Only full-time employees eligible.</li>
</ul></div>
<div class="jobsearch-SerpJobCard-footer">
<div class="jobsearch-SerpJobCard-footerActions">
<div class="result-link-bar-container">
<div class="result-link-bar"><span class="date">2 days ago</span><span class="tt_set" id="tt_set_8"><div class="job-reaction"><button aria-expanded="false" aria-haspopup="true" aria-label="save or dislike" class="job-reaction-kebab" data-ol-has-click-handler="" onclick="toggleKebabMenu('3a260a2872349bb7', false, event); return false;" tabindex="0"></button><span class="job-reaction-kebab-menu"><button class="job-reaction-kebab-item job-reaction-save" data-ol-has-click-handler="" onclick="changeJobState('3a260a2872349bb7', 'save', 'linkbar', false, '');return false;"><svg focusable="false" height="16" viewbox="0 0 24 24" width="16"><g><path d="M16.5,3A6,6,0,0,0,12,5.09,6,6,0,0,0,7.5,3,5.45,5.45,0,0,0,2,8.5C2,12.28,5.4,15.36,10.55,20L12,21.35,13.45,20C18.6,15.36,22,12.28,22,8.5A5.45,5.45,0,0,0,16.5,3ZM12.1,18.55l-0.1.1-0.1-.1C7.14,14.24,4,11.39,4,8.5A3.42,3.42,0,0,1,7.5,5a3.91,3.91,0,0,1,3.57,2.36h1.87A3.88,3.88,0,0,1,16.5,5,3.42,3.42,0,0,1,20,8.5C20,11.39,16.86,14.24,12.1,18.55Z" fill="#2d2d2d"></path></g></svg><span class="job-reaction-kebab-item-text">Save job</span></button><button class="job-reaction-kebab-item job-reaction-dislike" data-ol-has-click-handler="" onclick="dislikeJob(false, false, '3a260a2872349bb7', 'unsave', 'linkbar', false, '');"><span class="job-reaction-dislike-icon"></span><span class="job-reaction-kebab-item-text">Not interested</span></button><button class="job-reaction-kebab-item job-reaction-report" onclick="reportJob('3a260a2872349bb7');"><span class="job-reaction-report-icon"></span><span class="job-reaction-kebab-item-text">Report Job</span></button></span></div><span class="result-link-bar-separator">·</span><a class="sl resultLink save-job-link" href="#" id="sj_3a260a2872349bb7" onclick="changeJobState('3a260a2872349bb7', 'save', 'linkbar', false, ''); return false;" title="Save this job to my.indeed">Save job</a><span class="result-link-bar-separator">·</span><button aria-expanded="false" class="sl resultLink more-link" id="tog_8" onclick="toggleMoreLinks('3a260a2872349bb7', '8'); return false;">More...</button></span><div class="edit_note_content" id="editsaved2_3a260a2872349bb7" style="display:none;"></div><script>if (!window['result_3a260a2872349bb7']) {window['result_3a260a2872349bb7'] = {};}window['result_3a260a2872349bb7']['showSource'] = false; window['result_3a260a2872349bb7']['source'] = "Indeed"; window['result_3a260a2872349bb7']['loggedIn'] = false; window['result_3a260a2872349bb7']['showMyJobsLinks'] = false;window['result_3a260a2872349bb7']['undoAction'] = "unsave";window['result_3a260a2872349bb7']['relativeJobAge'] = "2 days ago";window['result_3a260a2872349bb7']['jobKey'] = "3a260a2872349bb7"; window['result_3a260a2872349bb7']['myIndeedAvailable'] = true; window['result_3a260a2872349bb7']['showMoreActionsLink'] = window['result_3a260a2872349bb7']['showMoreActionsLink'] || true; window['result_3a260a2872349bb7']['resultNumber'] = 8; window['result_3a260a2872349bb7']['jobStateChangedToSaved'] = false; window['result_3a260a2872349bb7']['searchState'] = "q=intelligence analyst&start=2"; window['result_3a260a2872349bb7']['basicPermaLink'] = "https://www.indeed.com"; window['result_3a260a2872349bb7']['saveJobFailed'] = false; window['result_3a260a2872349bb7']['removeJobFailed'] = false; window['result_3a260a2872349bb7']['requestPending'] = false; window['result_3a260a2872349bb7']['notesEnabled'] = true; window['result_3a260a2872349bb7']['currentPage'] = "serp"; window['result_3a260a2872349bb7']['sponsored'] = 
false;window['result_3a260a2872349bb7']['reportJobButtonEnabled'] = false; window['result_3a260a2872349bb7']['showMyJobsHired'] = false; window['result_3a260a2872349bb7']['showSaveForSponsored'] = false; window['result_3a260a2872349bb7']['showJobAge'] = true; window['result_3a260a2872349bb7']['showHolisticCard'] = true; window['result_3a260a2872349bb7']['showDislike'] = true; window['result_3a260a2872349bb7']['showKebab'] = true; window['result_3a260a2872349bb7']['showReport'] = true;</script></div></div>
</div>
</div>
<div class="tab-container">
<div class="more-links-container result-tab" id="tt_display_8" style="display:none;"><div class="more_actions" id="more_8"><ul><li><span class="mat">View all <a href="/q-Data-Driven-l-United-States-jobs.html">Data Driven jobs in United States</a> - <a href="/l-United-States-jobs.html">United States jobs</a></span></li><li><span class="mat">Salary Search: <a href="/salaries/data-analyst-Salaries,-US" onmousedown="this.href = appendParamsOnce(this.href, '?campaignid=serp-more&fromjk=3a260a2872349bb7&from=serp-more');">Data Analyst salaries in United States</a></span></li><li><span class="mat">Explore career as Data Analyst: <a href="/career/data-analyst" onmousedown="this.href = appendParamsOnce(this.href, 'from=jasx');">overview</a>, <a href="/career/data-analyst/career-advice" onmousedown="this.href = appendParamsOnce(this.href, 'from=jasx');">career advice</a>, <a href="/career/data-analyst/faq" onmousedown="this.href = appendParamsOnce(this.href, 'from=jasx');">FAQs</a></span></li></ul></div><a class="close-link closeLink" href="#" onclick="toggleMoreLinks('3a260a2872349bb7'); return false;" title="Close"></a></div><div class="dya-container result-tab"></div>
<div class="tellafriend-container result-tab email_job_content"></div>
<div class="sign-in-container result-tab"></div>
<div class="notes-container result-tab"></div>
</div>
</div>
<div class="jobToJobRec_Hide" id="jobToJobRec_3a260a2872349bb7_sj"></div>
<div class="jobsearch-SerpJobCard unifiedRow row result" data-jk="6c30c593ca66e0df" data-tn-component="organicJob" id="p_6c30c593ca66e0df">
<h2 class="title">
<a class="jobtitle turnstileLink" data-tn-element="jobTitle" href="/rc/clk?jk=6c30c593ca66e0df&fccid=fa46558dfda4e0fa&vjs=3" id="jl_6c30c593ca66e0df" onclick="setRefineByCookie([]); return rclk(this,jobmap[9],true,0);" onmousedown="return rclk(this,jobmap[9],0);" rel="noopener nofollow" target="_blank" title="Language-enabled Open Source Intelligence Analyst (OSINT)">
Language-enabled Open Source <b>Intelligence</b> <b>Analyst</b> (OSINT)</a>
</h2>
<div class="sjcl">
<div>
<span class="company">
DarkStar <b>Intelligence</b></span>
</div>
<div class="recJobLoc" data-rc-loc="Springfield, VA" id="recJobLoc_6c30c593ca66e0df" style="display: none"></div>
<span class="location accessible-contrast-color-location">Springfield, VA</span>
</div>
<table class="jobCardShelfContainer" role="presentation"><tr class="jobCardShelf"><td class="jobCardShelfItem indeedApply"><span class="jobCardShelfIcon"><svg fill="none" height="16" viewbox="0 0 20 20" width="16"><rect fill="#FF5A1F" height="20" rx="10" width="20"></rect><path clip-rule="evenodd" d="M15.3125 4.0625L10.8125 15.3125L7.99999 11.375L15.3125 4.0625ZM7.604 12.7576L6.875 15.3125L8.567 14.1054L7.604 12.7576ZM7.20463 10.5796L12.419 5.36525L4.0625 9.125L6.9875 10.7968L7.20463 10.5796Z" fill="white" fill-rule="evenodd"></path></svg></span><span class="iaLabel iaIconActive">Easily apply</span></td></tr></table><div class="summary">
<ul style="list-style-type:circle;margin-top: 0px;margin-bottom: 0px;padding-left:20px;">
<li style="margin-bottom:0px;">HS Diploma + 5 years experience / Bachelor's Degree + 3 years experience.</li>
<li>Working knowledge of current <b>intelligence</b>, threat analysis, and forecasting…</li>
</ul></div>
<div class="jobsearch-SerpJobCard-footer">
<div class="jobsearch-SerpJobCard-footerActions">
<div class="result-link-bar-container">
<div class="result-link-bar"><span class="date">10 days ago</span><span class="tt_set" id="tt_set_9"><div class="job-reaction"><button aria-expanded="false" aria-haspopup="true" aria-label="save or dislike" class="job-reaction-kebab" data-ol-has-click-handler="" onclick="toggleKebabMenu('6c30c593ca66e0df', false, event); return false;" tabindex="0"></button><span class="job-reaction-kebab-menu"><button class="job-reaction-kebab-item job-reaction-save" data-ol-has-click-handler="" onclick="changeJobState('6c30c593ca66e0df', 'save', 'linkbar', false, '');return false;"><svg focusable="false" height="16" viewbox="0 0 24 24" width="16"><g><path d="M16.5,3A6,6,0,0,0,12,5.09,6,6,0,0,0,7.5,3,5.45,5.45,0,0,0,2,8.5C2,12.28,5.4,15.36,10.55,20L12,21.35,13.45,20C18.6,15.36,22,12.28,22,8.5A5.45,5.45,0,0,0,16.5,3ZM12.1,18.55l-0.1.1-0.1-.1C7.14,14.24,4,11.39,4,8.5A3.42,3.42,0,0,1,7.5,5a3.91,3.91,0,0,1,3.57,2.36h1.87A3.88,3.88,0,0,1,16.5,5,3.42,3.42,0,0,1,20,8.5C20,11.39,16.86,14.24,12.1,18.55Z" fill="#2d2d2d"></path></g></svg><span class="job-reaction-kebab-item-text">Save job</span></button><button class="job-reaction-kebab-item job-reaction-dislike" data-ol-has-click-handler="" onclick="dislikeJob(false, false, '6c30c593ca66e0df', 'unsave', 'linkbar', false, '');"><span class="job-reaction-dislike-icon"></span><span class="job-reaction-kebab-item-text">Not interested</span></button><button class="job-reaction-kebab-item job-reaction-report" onclick="reportJob('6c30c593ca66e0df');"><span class="job-reaction-report-icon"></span><span class="job-reaction-kebab-item-text">Report Job</span></button></span></div><span class="result-link-bar-separator">·</span><a class="sl resultLink save-job-link" href="#" id="sj_6c30c593ca66e0df" onclick="changeJobState('6c30c593ca66e0df', 'save', 'linkbar', false, ''); return false;" title="Save this job to my.indeed">Save job</a><span class="result-link-bar-separator">·</span><button aria-expanded="false" class="sl resultLink more-link" id="tog_9" onclick="toggleMoreLinks('6c30c593ca66e0df', '9'); return false;">More...</button></span><div class="edit_note_content" id="editsaved2_6c30c593ca66e0df" style="display:none;"></div><script>if (!window['result_6c30c593ca66e0df']) {window['result_6c30c593ca66e0df'] = {};}window['result_6c30c593ca66e0df']['showSource'] = false; window['result_6c30c593ca66e0df']['source'] = "DarkStar Intelligence"; window['result_6c30c593ca66e0df']['loggedIn'] = false; window['result_6c30c593ca66e0df']['showMyJobsLinks'] = false;window['result_6c30c593ca66e0df']['undoAction'] = "unsave";window['result_6c30c593ca66e0df']['relativeJobAge'] = "10 days ago";window['result_6c30c593ca66e0df']['jobKey'] = "6c30c593ca66e0df"; window['result_6c30c593ca66e0df']['myIndeedAvailable'] = true; window['result_6c30c593ca66e0df']['showMoreActionsLink'] = window['result_6c30c593ca66e0df']['showMoreActionsLink'] || true; window['result_6c30c593ca66e0df']['resultNumber'] = 9; window['result_6c30c593ca66e0df']['jobStateChangedToSaved'] = false; window['result_6c30c593ca66e0df']['searchState'] = "q=intelligence analyst&start=2"; window['result_6c30c593ca66e0df']['basicPermaLink'] = "https://www.indeed.com"; window['result_6c30c593ca66e0df']['saveJobFailed'] = false; window['result_6c30c593ca66e0df']['removeJobFailed'] = false; window['result_6c30c593ca66e0df']['requestPending'] = false; window['result_6c30c593ca66e0df']['notesEnabled'] = true; window['result_6c30c593ca66e0df']['currentPage'] = "serp"; window['result_6c30c593ca66e0df']['sponsored'] = 
false;window['result_6c30c593ca66e0df']['reportJobButtonEnabled'] = false; window['result_6c30c593ca66e0df']['showMyJobsHired'] = false; window['result_6c30c593ca66e0df']['showSaveForSponsored'] = false; window['result_6c30c593ca66e0df']['showJobAge'] = true; window['result_6c30c593ca66e0df']['showHolisticCard'] = true; window['result_6c30c593ca66e0df']['showDislike'] = true; window['result_6c30c593ca66e0df']['showKebab'] = true; window['result_6c30c593ca66e0df']['showReport'] = true;</script></div></div>
</div>
</div>
<div class="tab-container">
<div class="more-links-container result-tab" id="tt_display_9" style="display:none;"><div class="more_actions" id="more_9"><ul><li><span class="mat">View all <a href="/q-Darkstar-Intelligence-l-Springfield,-VA-jobs.html">DarkStar Intelligence jobs in Springfield, VA</a> - <a href="/l-Springfield,-VA-jobs.html">Springfield jobs</a></span></li><li><span class="mat">Salary Search: <a href="/salaries/intelligence-analyst-Salaries,-Springfield-VA" onmousedown="this.href = appendParamsOnce(this.href, '?campaignid=serp-more&fromjk=6c30c593ca66e0df&from=serp-more');">Intelligence Analyst salaries in Springfield, VA</a></span></li></ul></div><a class="close-link closeLink" href="#" onclick="toggleMoreLinks('6c30c593ca66e0df'); return false;" title="Close"></a></div><div class="dya-container result-tab"></div>
<div class="tellafriend-container result-tab email_job_content"></div>
<div class="sign-in-container result-tab"></div>
<div class="notes-container result-tab"></div>
</div>
</div>
<div class="jobToJobRec_Hide" id="jobToJobRec_6c30c593ca66e0df_sj"></div>
<div class="mosaic-zone" id="mosaic-zone-afterTenthJobResult"></div><script type="text/javascript">
try {
window.mosaic.onMosaicApiReady(function() {
var zoneId = 'afterTenthJobResult';
var providers = window.mosaic.zonedProviders[zoneId];
if (providers) {
providers.filter(function(p) { return window.mosaic.lazyFns[p]; }).forEach(function(p) {
return window.mosaic.api.loadProvider(p);
});
}
});
} catch (e) {};
</script><div class="jobsearch-SerpJobCard unifiedRow row result" data-jk="5cb310560449d9cf" data-tn-component="organicJob" id="p_5cb310560449d9cf">
<h2 class="title">
<a class="jobtitle turnstileLink" data-tn-element="jobTitle" href="/rc/clk?jk=5cb310560449d9cf&fccid=290a4498a64fc044&vjs=3" id="jl_5cb310560449d9cf" onclick="setRefineByCookie([]); return rclk(this,jobmap[10],true,0);" onmousedown="return rclk(this,jobmap[10],0);" rel="noopener nofollow" target="_blank" title="Intelligence Analyst">
<b>Intelligence</b> <b>Analyst</b></a>
<span class="new">new</span></h2>
<div class="sjcl">
<div>
<span class="company">
<a class="turnstileLink" data-tn-element="companyName" href="/cmp/CACI-International-Inc" onmousedown="this.href = appendParamsOnce(this.href, 'from=SERP&campaignid=serp-linkcompanyname&fromjk=5cb310560449d9cf&jcid=690facff3df3ae47')" rel="noopener" target="_blank">
CACI</a></span>
<span class="ratingsDisplay">
<a class="ratingNumber" data-tn-variant="cmplinktst2" href="/cmp/CACI-International-Inc/reviews" onmousedown="this.href = appendParamsOnce(this.href, '?campaignid=cmplinktst2&from=SERP&jt=Intelligence+Analyst&fromjk=5cb310560449d9cf&jcid=690facff3df3ae47');" rel="noopener" target="_blank" title="CACI reviews">
<span class="ratingsContent">
3.8<svg class="starIcon" height="12px" role="img" width="12px">
<g>
<path d="M 12.00,4.34 C 12.00,4.34 7.69,3.97 7.69,3.97 7.69,3.97 6.00,0.00 6.00,0.00 6.00,0.00 4.31,3.98 4.31,3.98 4.31,3.98 0.00,4.34 0.00,4.34 0.00,4.34 3.28,7.18 3.28,7.18 3.28,7.18 2.29,11.40 2.29,11.40 2.29,11.40 6.00,9.16 6.00,9.16 6.00,9.16 9.71,11.40 9.71,11.40 9.71,11.40 8.73,7.18 8.73,7.18 8.73,7.18 12.00,4.34 12.00,4.34 Z" style="fill: #FFB103"></path>
</g>
</svg>
</span>
</a>
</span>
</div>
<div class="recJobLoc" data-rc-loc="Nebraska" id="recJobLoc_5cb310560449d9cf" style="display: none"></div>
<span class="location accessible-contrast-color-location">Nebraska</span>
</div>
<div class="summary">
<ul style="list-style-type:circle;margin-top: 0px;margin-bottom: 0px;padding-left:20px;">
<li>Possess a Bachelor’s degree in international affairs, national security studies, African studies, international business, international terrorism, trends and…</li>
</ul></div>
<div class="jobsearch-SerpJobCard-footer">
<div class="jobsearch-SerpJobCard-footerActions">
<div class="result-link-bar-container">
<div class="result-link-bar"><span class="date">Today</span><span class="tt_set" id="tt_set_10"><div class="job-reaction"><button aria-expanded="false" aria-haspopup="true" aria-label="save or dislike" class="job-reaction-kebab" data-ol-has-click-handler="" onclick="toggleKebabMenu('5cb310560449d9cf', false, event); return false;" tabindex="0"></button><span class="job-reaction-kebab-menu"><button class="job-reaction-kebab-item job-reaction-save" data-ol-has-click-handler="" onclick="changeJobState('5cb310560449d9cf', 'save', 'linkbar', false, '');return false;"><svg focusable="false" height="16" viewbox="0 0 24 24" width="16"><g><path d="M16.5,3A6,6,0,0,0,12,5.09,6,6,0,0,0,7.5,3,5.45,5.45,0,0,0,2,8.5C2,12.28,5.4,15.36,10.55,20L12,21.35,13.45,20C18.6,15.36,22,12.28,22,8.5A5.45,5.45,0,0,0,16.5,3ZM12.1,18.55l-0.1.1-0.1-.1C7.14,14.24,4,11.39,4,8.5A3.42,3.42,0,0,1,7.5,5a3.91,3.91,0,0,1,3.57,2.36h1.87A3.88,3.88,0,0,1,16.5,5,3.42,3.42,0,0,1,20,8.5C20,11.39,16.86,14.24,12.1,18.55Z" fill="#2d2d2d"></path></g></svg><span class="job-reaction-kebab-item-text">Save job</span></button><button class="job-reaction-kebab-item job-reaction-dislike" data-ol-has-click-handler="" onclick="dislikeJob(false, false, '5cb310560449d9cf', 'unsave', 'linkbar', false, '');"><span class="job-reaction-dislike-icon"></span><span class="job-reaction-kebab-item-text">Not interested</span></button><button class="job-reaction-kebab-item job-reaction-report" onclick="reportJob('5cb310560449d9cf');"><span class="job-reaction-report-icon"></span><span class="job-reaction-kebab-item-text">Report Job</span></button></span></div><span class="result-link-bar-separator">·</span><a class="sl resultLink save-job-link" href="#" id="sj_5cb310560449d9cf" onclick="changeJobState('5cb310560449d9cf', 'save', 'linkbar', false, ''); return false;" title="Save this job to my.indeed">Save job</a><span class="result-link-bar-separator">·</span><button aria-expanded="false" class="sl resultLink more-link" id="tog_10" onclick="toggleMoreLinks('5cb310560449d9cf', '10'); return false;">More...</button></span><div class="edit_note_content" id="editsaved2_5cb310560449d9cf" style="display:none;"></div><script>if (!window['result_5cb310560449d9cf']) {window['result_5cb310560449d9cf'] = {};}window['result_5cb310560449d9cf']['showSource'] = false; window['result_5cb310560449d9cf']['source'] = "CACI"; window['result_5cb310560449d9cf']['loggedIn'] = false; window['result_5cb310560449d9cf']['showMyJobsLinks'] = false;window['result_5cb310560449d9cf']['undoAction'] = "unsave";window['result_5cb310560449d9cf']['relativeJobAge'] = "Today";window['result_5cb310560449d9cf']['jobKey'] = "5cb310560449d9cf"; window['result_5cb310560449d9cf']['myIndeedAvailable'] = true; window['result_5cb310560449d9cf']['showMoreActionsLink'] = window['result_5cb310560449d9cf']['showMoreActionsLink'] || true; window['result_5cb310560449d9cf']['resultNumber'] = 10; window['result_5cb310560449d9cf']['jobStateChangedToSaved'] = false; window['result_5cb310560449d9cf']['searchState'] = "q=intelligence analyst&start=2"; window['result_5cb310560449d9cf']['basicPermaLink'] = "https://www.indeed.com"; window['result_5cb310560449d9cf']['saveJobFailed'] = false; window['result_5cb310560449d9cf']['removeJobFailed'] = false; window['result_5cb310560449d9cf']['requestPending'] = false; window['result_5cb310560449d9cf']['notesEnabled'] = true; window['result_5cb310560449d9cf']['currentPage'] = "serp"; window['result_5cb310560449d9cf']['sponsored'] = 
false;window['result_5cb310560449d9cf']['reportJobButtonEnabled'] = false; window['result_5cb310560449d9cf']['showMyJobsHired'] = false; window['result_5cb310560449d9cf']['showSaveForSponsored'] = false; window['result_5cb310560449d9cf']['showJobAge'] = true; window['result_5cb310560449d9cf']['showHolisticCard'] = true; window['result_5cb310560449d9cf']['showDislike'] = true; window['result_5cb310560449d9cf']['showKebab'] = true; window['result_5cb310560449d9cf']['showReport'] = true;</script></div></div>
</div>
</div>
<div class="tab-container">
<div class="more-links-container result-tab" id="tt_display_10" style="display:none;"><div class="more_actions" id="more_10"><ul><li><span class="mat">View all <a href="/q-CACI-l-Nebraska-jobs.html">CACI jobs in Nebraska</a> - <a href="/l-Nebraska-jobs.html">Nebraska jobs</a></span></li><li><span class="mat">Salary Search: <a href="/salaries/intelligence-analyst-Salaries,-Nebraska" onmousedown="this.href = appendParamsOnce(this.href, '?campaignid=serp-more&fromjk=5cb310560449d9cf&from=serp-more');">Intelligence Analyst salaries in Nebraska</a></span></li><li><span class="mat">Learn more about working at <a href="/cmp/CACI-International-Inc/about" onmousedown="this.href = appendParamsOnce(this.href, '?fromjk=5cb310560449d9cf&from=serp-more&campaignid=serp-more&jcid=690facff3df3ae47');">CACI</a></span></li><li><span class="mat">See popular <a href="/cmp/CACI-International-Inc/faq" onmousedown="this.href = appendParamsOnce(this.href, '?from=serp-more&campaignid=serp-more&fromjk=5cb310560449d9cf&jcid=690facff3df3ae47');">questions & answers about CACI</a></span></li></ul></div><a class="close-link closeLink" href="#" onclick="toggleMoreLinks('5cb310560449d9cf'); return false;" title="Close"></a></div><div class="dya-container result-tab"></div>
<div class="tellafriend-container result-tab email_job_content"></div>
<div class="sign-in-container result-tab"></div>
<div class="notes-container result-tab"></div>
</div>
</div>
<div class="jobToJobRec_Hide" id="jobToJobRec_5cb310560449d9cf_sj"></div>
<div class="jobsearch-SerpJobCard unifiedRow row result" data-jk="d7ced4085f56787c" data-tn-component="organicJob" id="p_d7ced4085f56787c">
<h2 class="title">
<a class="jobtitle turnstileLink" data-tn-element="jobTitle" href="/rc/clk?jk=d7ced4085f56787c&fccid=8edf5fae4bf4a9ae&vjs=3" id="jl_d7ced4085f56787c" onclick="setRefineByCookie([]); return rclk(this,jobmap[11],true,1);" onmousedown="return rclk(this,jobmap[11],1);" rel="noopener nofollow" target="_blank" title="Part-Time Flex Intelligence Analyst">
Part-Time Flex <b>Intelligence</b> <b>Analyst</b></a>
<span class="new">new</span></h2>
<div class="sjcl">
<div>
<span class="company">
<a class="turnstileLink" data-tn-element="companyName" href="/cmp/As-Solution-North-America" onmousedown="this.href = appendParamsOnce(this.href, 'from=SERP&campaignid=serp-linkcompanyname&fromjk=d7ced4085f56787c&jcid=8edf5fae4bf4a9ae')" rel="noopener" target="_blank">
AS Solution North America</a></span>
<span class="ratingsDisplay">
<a class="ratingNumber" data-tn-variant="cmplinktst2" href="/cmp/As-Solution-North-America/reviews" onmousedown="this.href = appendParamsOnce(this.href, '?campaignid=cmplinktst2&from=SERP&jt=Part-Time+Flex+Intelligence+Analyst&fromjk=d7ced4085f56787c&jcid=8edf5fae4bf4a9ae');" rel="noopener" target="_blank" title="As Solution North America reviews">
<span class="ratingsContent">
4.2<svg class="starIcon" height="12px" role="img" width="12px">
<g>
<path d="M 12.00,4.34 C 12.00,4.34 7.69,3.97 7.69,3.97 7.69,3.97 6.00,0.00 6.00,0.00 6.00,0.00 4.31,3.98 4.31,3.98 4.31,3.98 0.00,4.34 0.00,4.34 0.00,4.34 3.28,7.18 3.28,7.18 3.28,7.18 2.29,11.40 2.29,11.40 2.29,11.40 6.00,9.16 6.00,9.16 6.00,9.16 9.71,11.40 9.71,11.40 9.71,11.40 8.73,7.18 8.73,7.18 8.73,7.18 12.00,4.34 12.00,4.34 Z" style="fill: #FFB103"></path>
</g>
</svg>
</span>
</a>
</span>
</div>
<div class="recJobLoc" data-rc-loc="Bellevue, WA" id="recJobLoc_d7ced4085f56787c" style="display: none"></div>
<span class="location accessible-contrast-color-location">Bellevue, WA</span>
</div>
<div class="salarySnippet holisticSalary">
<span class="salary no-wrap">
<span class="salaryText">
$32 an hour</span>
</span>
</div>
<div class="summary">
<ul style="list-style-type:circle;margin-top: 0px;margin-bottom: 0px;padding-left:20px;">
<li>Relevant degree e.g. International Relations, <b>Intelligence</b> Studies, International Security Studies or equivalent experience working as an <b>intelligence</b> analyst…</li>
</ul></div>
<div class="jobsearch-SerpJobCard-footer">
<div class="jobsearch-SerpJobCard-footerActions">
<div class="result-link-bar-container">
<div class="result-link-bar"><span class="date">1 day ago</span><span class="tt_set" id="tt_set_11"><div class="job-reaction"><button aria-expanded="false" aria-haspopup="true" aria-label="save or dislike" class="job-reaction-kebab" data-ol-has-click-handler="" onclick="toggleKebabMenu('d7ced4085f56787c', false, event); return false;" tabindex="0"></button><span class="job-reaction-kebab-menu"><button class="job-reaction-kebab-item job-reaction-save" data-ol-has-click-handler="" onclick="changeJobState('d7ced4085f56787c', 'save', 'linkbar', false, '');return false;"><svg focusable="false" height="16" viewbox="0 0 24 24" width="16"><g><path d="M16.5,3A6,6,0,0,0,12,5.09,6,6,0,0,0,7.5,3,5.45,5.45,0,0,0,2,8.5C2,12.28,5.4,15.36,10.55,20L12,21.35,13.45,20C18.6,15.36,22,12.28,22,8.5A5.45,5.45,0,0,0,16.5,3ZM12.1,18.55l-0.1.1-0.1-.1C7.14,14.24,4,11.39,4,8.5A3.42,3.42,0,0,1,7.5,5a3.91,3.91,0,0,1,3.57,2.36h1.87A3.88,3.88,0,0,1,16.5,5,3.42,3.42,0,0,1,20,8.5C20,11.39,16.86,14.24,12.1,18.55Z" fill="#2d2d2d"></path></g></svg><span class="job-reaction-kebab-item-text">Save job</span></button><button class="job-reaction-kebab-item job-reaction-dislike" data-ol-has-click-handler="" onclick="dislikeJob(false, false, 'd7ced4085f56787c', 'unsave', 'linkbar', false, '');"><span class="job-reaction-dislike-icon"></span><span class="job-reaction-kebab-item-text">Not interested</span></button><button class="job-reaction-kebab-item job-reaction-report" onclick="reportJob('d7ced4085f56787c');"><span class="job-reaction-report-icon"></span><span class="job-reaction-kebab-item-text">Report Job</span></button></span></div><span class="result-link-bar-separator">·</span><a class="sl resultLink save-job-link" href="#" id="sj_d7ced4085f56787c" onclick="changeJobState('d7ced4085f56787c', 'save', 'linkbar', false, ''); return false;" title="Save this job to my.indeed">Save job</a><span class="result-link-bar-separator">·</span><button aria-expanded="false" class="sl resultLink more-link" id="tog_11" onclick="toggleMoreLinks('d7ced4085f56787c', '11'); return false;">More...</button></span><div class="edit_note_content" id="editsaved2_d7ced4085f56787c" style="display:none;"></div><script>if (!window['result_d7ced4085f56787c']) {window['result_d7ced4085f56787c'] = {};}window['result_d7ced4085f56787c']['showSource'] = false; window['result_d7ced4085f56787c']['source'] = "AS Solution North America"; window['result_d7ced4085f56787c']['loggedIn'] = false; window['result_d7ced4085f56787c']['showMyJobsLinks'] = false;window['result_d7ced4085f56787c']['undoAction'] = "unsave";window['result_d7ced4085f56787c']['relativeJobAge'] = "1 day ago";window['result_d7ced4085f56787c']['jobKey'] = "d7ced4085f56787c"; window['result_d7ced4085f56787c']['myIndeedAvailable'] = true; window['result_d7ced4085f56787c']['showMoreActionsLink'] = window['result_d7ced4085f56787c']['showMoreActionsLink'] || true; window['result_d7ced4085f56787c']['resultNumber'] = 11; window['result_d7ced4085f56787c']['jobStateChangedToSaved'] = false; window['result_d7ced4085f56787c']['searchState'] = "q=intelligence analyst&start=2"; window['result_d7ced4085f56787c']['basicPermaLink'] = "https://www.indeed.com"; window['result_d7ced4085f56787c']['saveJobFailed'] = false; window['result_d7ced4085f56787c']['removeJobFailed'] = false; window['result_d7ced4085f56787c']['requestPending'] = false; window['result_d7ced4085f56787c']['notesEnabled'] = true; window['result_d7ced4085f56787c']['currentPage'] = "serp"; window['result_d7ced4085f56787c']['sponsored'] = 
false;window['result_d7ced4085f56787c']['reportJobButtonEnabled'] = false; window['result_d7ced4085f56787c']['showMyJobsHired'] = false; window['result_d7ced4085f56787c']['showSaveForSponsored'] = false; window['result_d7ced4085f56787c']['showJobAge'] = true; window['result_d7ced4085f56787c']['showHolisticCard'] = true; window['result_d7ced4085f56787c']['showDislike'] = true; window['result_d7ced4085f56787c']['showKebab'] = true; window['result_d7ced4085f56787c']['showReport'] = true;</script></div></div>
</div>
</div>
<div class="tab-container">
<div class="more-links-container result-tab" id="tt_display_11" style="display:none;"><div class="more_actions" id="more_11"><ul><li><span class="mat">View all <a href="/q-As-Solution-North-America-l-Bellevue,-WA-jobs.html">AS Solution North America jobs in Bellevue, WA</a> - <a href="/l-Bellevue,-WA-jobs.html">Bellevue jobs</a></span></li><li><span class="mat">Salary Search: <a href="/salaries/intelligence-analyst-Salaries,-Bellevue-WA" onmousedown="this.href = appendParamsOnce(this.href, '?campaignid=serp-more&fromjk=d7ced4085f56787c&from=serp-more');">Intelligence Analyst salaries in Bellevue, WA</a></span></li><li><span class="mat">Learn more about working at <a href="/cmp/As-Solution-North-America" onmousedown="this.href = appendParamsOnce(this.href, '?fromjk=d7ced4085f56787c&from=serp-more&campaignid=serp-more&jcid=8edf5fae4bf4a9ae');">AS Solution North America</a></span></li><li><span class="mat">See popular <a href="/cmp/As-Solution-North-America/faq" onmousedown="this.href = appendParamsOnce(this.href, '?from=serp-more&campaignid=serp-more&fromjk=d7ced4085f56787c&jcid=8edf5fae4bf4a9ae');">questions & answers about AS Solution North America</a></span></li></ul></div><a class="close-link closeLink" href="#" onclick="toggleMoreLinks('d7ced4085f56787c'); return false;" title="Close"></a></div><div class="dya-container result-tab"></div>
<div class="tellafriend-container result-tab email_job_content"></div>
<div class="sign-in-container result-tab"></div>
<div class="notes-container result-tab"></div>
</div>
</div>
<div class="jobToJobRec_Hide" id="jobToJobRec_d7ced4085f56787c_sj"></div>
<div class="jobsearch-SerpJobCard unifiedRow row result" data-jk="fdc8b83168e5a725" data-tn-component="organicJob" id="p_fdc8b83168e5a725">
<h2 class="title">
<a class="jobtitle turnstileLink" data-tn-element="jobTitle" href="/rc/clk?jk=fdc8b83168e5a725&fccid=2b80669a5da4266c&vjs=3" id="jl_fdc8b83168e5a725" onclick="setRefineByCookie([]); return rclk(this,jobmap[12],true,0);" onmousedown="return rclk(this,jobmap[12],0);" rel="noopener nofollow" target="_blank" title="Intelligence Analyst">
<b>Intelligence</b> <b>Analyst</b></a>
</h2>
<div class="sjcl">
<div>
<span class="company">
<a class="turnstileLink" data-tn-element="companyName" href="/cmp/Ashburn-Consulting" onmousedown="this.href = appendParamsOnce(this.href, 'from=SERP&campaignid=serp-linkcompanyname&fromjk=fdc8b83168e5a725&jcid=ef90a1381d444ab9')" rel="noopener" target="_blank">
Ashburn Consulting</a></span>
<span class="ratingsDisplay">
<a class="ratingNumber" data-tn-variant="cmplinktst2" href="/cmp/Ashburn-Consulting/reviews" onmousedown="this.href = appendParamsOnce(this.href, '?campaignid=cmplinktst2&from=SERP&jt=Intelligence+Analyst&fromjk=fdc8b83168e5a725&jcid=ef90a1381d444ab9');" rel="noopener" target="_blank" title="Ashburn Consulting reviews">
<span class="ratingsContent">
3.5<svg class="starIcon" height="12px" role="img" width="12px">
<g>
<path d="M 12.00,4.34 C 12.00,4.34 7.69,3.97 7.69,3.97 7.69,3.97 6.00,0.00 6.00,0.00 6.00,0.00 4.31,3.98 4.31,3.98 4.31,3.98 0.00,4.34 0.00,4.34 0.00,4.34 3.28,7.18 3.28,7.18 3.28,7.18 2.29,11.40 2.29,11.40 2.29,11.40 6.00,9.16 6.00,9.16 6.00,9.16 9.71,11.40 9.71,11.40 9.71,11.40 8.73,7.18 8.73,7.18 8.73,7.18 12.00,4.34 12.00,4.34 Z" style="fill: #FFB103"></path>
</g>
</svg>
</span>
</a>
</span>
</div>
<div class="recJobLoc" data-rc-loc="Fairfax, VA" id="recJobLoc_fdc8b83168e5a725" style="display: none"></div>
<span class="location accessible-contrast-color-location">Fairfax, VA</span>
</div>
<table class="jobCardShelfContainer" role="presentation"><tr class="jobCardShelf"><td class="jobCardShelfItem indeedApply"><span class="jobCardShelfIcon"><svg fill="none" height="16" viewbox="0 0 20 20" width="16"><rect fill="#FF5A1F" height="20" rx="10" width="20"></rect><path clip-rule="evenodd" d="M15.3125 4.0625L10.8125 15.3125L7.99999 11.375L15.3125 4.0625ZM7.604 12.7576L6.875 15.3125L8.567 14.1054L7.604 12.7576ZM7.20463 10.5796L12.419 5.36525L4.0625 9.125L6.9875 10.7968L7.20463 10.5796Z" fill="white" fill-rule="evenodd"></path></svg></span><span class="iaLabel iaIconActive">Easily apply</span></td></tr></table><div class="summary">
<ul style="list-style-type:circle;margin-top: 0px;margin-bottom: 0px;padding-left:20px;">
<li style="margin-bottom:0px;">Demonstrated ability to create and provide <b>intelligence</b> briefings for all levels of personnel.</li>
<li>Ability to effectively collect, analyze, summarize, interpret,…</li>
</ul></div>
<div class="jobsearch-SerpJobCard-footer">
<div class="jobsearch-SerpJobCard-footerActions">
<div class="result-link-bar-container">
<div class="result-link-bar"><span class="date">29 days ago</span><span class="tt_set" id="tt_set_12"><div class="job-reaction"><button aria-expanded="false" aria-haspopup="true" aria-label="save or dislike" class="job-reaction-kebab" data-ol-has-click-handler="" onclick="toggleKebabMenu('fdc8b83168e5a725', false, event); return false;" tabindex="0"></button><span class="job-reaction-kebab-menu"><button class="job-reaction-kebab-item job-reaction-save" data-ol-has-click-handler="" onclick="changeJobState('fdc8b83168e5a725', 'save', 'linkbar', false, '');return false;"><svg focusable="false" height="16" viewbox="0 0 24 24" width="16"><g><path d="M16.5,3A6,6,0,0,0,12,5.09,6,6,0,0,0,7.5,3,5.45,5.45,0,0,0,2,8.5C2,12.28,5.4,15.36,10.55,20L12,21.35,13.45,20C18.6,15.36,22,12.28,22,8.5A5.45,5.45,0,0,0,16.5,3ZM12.1,18.55l-0.1.1-0.1-.1C7.14,14.24,4,11.39,4,8.5A3.42,3.42,0,0,1,7.5,5a3.91,3.91,0,0,1,3.57,2.36h1.87A3.88,3.88,0,0,1,16.5,5,3.42,3.42,0,0,1,20,8.5C20,11.39,16.86,14.24,12.1,18.55Z" fill="#2d2d2d"></path></g></svg><span class="job-reaction-kebab-item-text">Save job</span></button><button class="job-reaction-kebab-item job-reaction-dislike" data-ol-has-click-handler="" onclick="dislikeJob(false, false, 'fdc8b83168e5a725', 'unsave', 'linkbar', false, '');"><span class="job-reaction-dislike-icon"></span><span class="job-reaction-kebab-item-text">Not interested</span></button><button class="job-reaction-kebab-item job-reaction-report" onclick="reportJob('fdc8b83168e5a725');"><span class="job-reaction-report-icon"></span><span class="job-reaction-kebab-item-text">Report Job</span></button></span></div><span class="result-link-bar-separator">·</span><a class="sl resultLink save-job-link" href="#" id="sj_fdc8b83168e5a725" onclick="changeJobState('fdc8b83168e5a725', 'save', 'linkbar', false, ''); return false;" title="Save this job to my.indeed">Save job</a><span class="result-link-bar-separator">·</span><button aria-expanded="false" class="sl resultLink more-link" id="tog_12" onclick="toggleMoreLinks('fdc8b83168e5a725', '12'); return false;">More...</button></span><div class="edit_note_content" id="editsaved2_fdc8b83168e5a725" style="display:none;"></div><script>if (!window['result_fdc8b83168e5a725']) {window['result_fdc8b83168e5a725'] = {};}window['result_fdc8b83168e5a725']['showSource'] = false; window['result_fdc8b83168e5a725']['source'] = "Ashburn Consulting"; window['result_fdc8b83168e5a725']['loggedIn'] = false; window['result_fdc8b83168e5a725']['showMyJobsLinks'] = false;window['result_fdc8b83168e5a725']['undoAction'] = "unsave";window['result_fdc8b83168e5a725']['relativeJobAge'] = "29 days ago";window['result_fdc8b83168e5a725']['jobKey'] = "fdc8b83168e5a725"; window['result_fdc8b83168e5a725']['myIndeedAvailable'] = true; window['result_fdc8b83168e5a725']['showMoreActionsLink'] = window['result_fdc8b83168e5a725']['showMoreActionsLink'] || true; window['result_fdc8b83168e5a725']['resultNumber'] = 12; window['result_fdc8b83168e5a725']['jobStateChangedToSaved'] = false; window['result_fdc8b83168e5a725']['searchState'] = "q=intelligence analyst&start=2"; window['result_fdc8b83168e5a725']['basicPermaLink'] = "https://www.indeed.com"; window['result_fdc8b83168e5a725']['saveJobFailed'] = false; window['result_fdc8b83168e5a725']['removeJobFailed'] = false; window['result_fdc8b83168e5a725']['requestPending'] = false; window['result_fdc8b83168e5a725']['notesEnabled'] = true; window['result_fdc8b83168e5a725']['currentPage'] = "serp"; window['result_fdc8b83168e5a725']['sponsored'] = 
false;window['result_fdc8b83168e5a725']['reportJobButtonEnabled'] = false; window['result_fdc8b83168e5a725']['showMyJobsHired'] = false; window['result_fdc8b83168e5a725']['showSaveForSponsored'] = false; window['result_fdc8b83168e5a725']['showJobAge'] = true; window['result_fdc8b83168e5a725']['showHolisticCard'] = true; window['result_fdc8b83168e5a725']['showDislike'] = true; window['result_fdc8b83168e5a725']['showKebab'] = true; window['result_fdc8b83168e5a725']['showReport'] = true;</script></div></div>
</div>
</div>
<div class="tab-container">
<div class="more-links-container result-tab" id="tt_display_12" style="display:none;"><div class="more_actions" id="more_12"><ul><li><span class="mat">View all <a href="/q-Ashburn-Consulting-l-Fairfax,-VA-jobs.html">Ashburn Consulting jobs in Fairfax, VA</a> - <a href="/l-Fairfax,-VA-jobs.html">Fairfax jobs</a></span></li><li><span class="mat">Learn more about working at <a href="/cmp/Ashburn-Consulting" onmousedown="this.href = appendParamsOnce(this.href, '?fromjk=fdc8b83168e5a725&from=serp-more&campaignid=serp-more&jcid=ef90a1381d444ab9');">Ashburn Consulting</a></span></li></ul></div><a class="close-link closeLink" href="#" onclick="toggleMoreLinks('fdc8b83168e5a725'); return false;" title="Close"></a></div><div class="dya-container result-tab"></div>
<div class="tellafriend-container result-tab email_job_content"></div>
<div class="sign-in-container result-tab"></div>
<div class="notes-container result-tab"></div>
</div>
</div>
<div class="jobToJobRec_Hide" id="jobToJobRec_fdc8b83168e5a725_sj"></div>
<div class="jobsearch-SerpJobCard unifiedRow row result" data-jk="daa6eb5bb511c8f4" data-tn-component="organicJob" id="p_daa6eb5bb511c8f4">
<h2 class="title">
<a class="jobtitle turnstileLink" data-tn-element="jobTitle" href="/company/Samaritan-Protective-Services/jobs/Intelligence-Analyst-daa6eb5bb511c8f4?fccid=5743859864338379&vjs=3" id="jl_daa6eb5bb511c8f4" onclick="setRefineByCookie([]); return rclk(this,jobmap[13],true,1);" onmousedown="return rclk(this,jobmap[13],1);" rel="noopener nofollow" target="_blank" title="Intelligence Analyst">
<b>Intelligence</b> <b>Analyst</b></a>
</h2>
<div class="sjcl">
<div>
<span class="company">
Samaritan Protective Services</span>
</div>
<div class="recJobLoc" data-rc-loc="Woodbridge, VA" id="recJobLoc_daa6eb5bb511c8f4" style="display: none"></div>
<span class="location accessible-contrast-color-location">Woodbridge, VA 22192</span>
</div>
<div class="salarySnippet holisticSalary">
<span class="salary no-wrap">
<span class="salaryText">
Up to $25 an hour</span>
</span>
</div>
<div class="jobCardReqContainer"><div class="jobCardReqHeader">Requirements</div><div class="jobCardReqList"><div class="jobCardReqItem">Language: English</div><div class="jobCardReqItem">Bachelor's</div></div></div><table class="jobCardShelfContainer" role="presentation"><tr class="jobCardShelf"><td class="jobCardShelfItem indeedApply"><span class="jobCardShelfIcon"><svg fill="none" height="16" viewbox="0 0 20 20" width="16"><rect fill="#FF5A1F" height="20" rx="10" width="20"></rect><path clip-rule="evenodd" d="M15.3125 4.0625L10.8125 15.3125L7.99999 11.375L15.3125 4.0625ZM7.604 12.7576L6.875 15.3125L8.567 14.1054L7.604 12.7576ZM7.20463 10.5796L12.419 5.36525L4.0625 9.125L6.9875 10.7968L7.20463 10.5796Z" fill="white" fill-rule="evenodd"></path></svg></span><span class="iaLabel iaIconActive">Easily apply</span></td></tr></table><div class="summary">
<ul style="list-style-type:circle;margin-top: 0px;margin-bottom: 0px;padding-left:20px;">
<li style="margin-bottom:0px;">Degree in political science, <b>intelligence</b> studies or related field, preferred.</li>
<li style="margin-bottom:0px;">Intelligence Analysis: 5 years (Preferred).</li>
<li>Abide by all security protocols.</li>
</ul></div>
<div class="jobsearch-SerpJobCard-footer">
<div class="jobsearch-SerpJobCard-footerActions">
<div class="result-link-bar-container">
<div class="result-link-bar"><span class="date">19 days ago</span><span class="tt_set" id="tt_set_13"><div class="job-reaction"><button aria-expanded="false" aria-haspopup="true" aria-label="save or dislike" class="job-reaction-kebab" data-ol-has-click-handler="" onclick="toggleKebabMenu('daa6eb5bb511c8f4', false, event); return false;" tabindex="0"></button><span class="job-reaction-kebab-menu"><button class="job-reaction-kebab-item job-reaction-save" data-ol-has-click-handler="" onclick="changeJobState('daa6eb5bb511c8f4', 'save', 'linkbar', false, '');return false;"><svg focusable="false" height="16" viewbox="0 0 24 24" width="16"><g><path d="M16.5,3A6,6,0,0,0,12,5.09,6,6,0,0,0,7.5,3,5.45,5.45,0,0,0,2,8.5C2,12.28,5.4,15.36,10.55,20L12,21.35,13.45,20C18.6,15.36,22,12.28,22,8.5A5.45,5.45,0,0,0,16.5,3ZM12.1,18.55l-0.1.1-0.1-.1C7.14,14.24,4,11.39,4,8.5A3.42,3.42,0,0,1,7.5,5a3.91,3.91,0,0,1,3.57,2.36h1.87A3.88,3.88,0,0,1,16.5,5,3.42,3.42,0,0,1,20,8.5C20,11.39,16.86,14.24,12.1,18.55Z" fill="#2d2d2d"></path></g></svg><span class="job-reaction-kebab-item-text">Save job</span></button><button class="job-reaction-kebab-item job-reaction-dislike" data-ol-has-click-handler="" onclick="dislikeJob(false, false, 'daa6eb5bb511c8f4', 'unsave', 'linkbar', false, '');"><span class="job-reaction-dislike-icon"></span><span class="job-reaction-kebab-item-text">Not interested</span></button><button class="job-reaction-kebab-item job-reaction-report" onclick="reportJob('daa6eb5bb511c8f4');"><span class="job-reaction-report-icon"></span><span class="job-reaction-kebab-item-text">Report Job</span></button></span></div><span class="result-link-bar-separator">·</span><a class="sl resultLink save-job-link" href="#" id="sj_daa6eb5bb511c8f4" onclick="changeJobState('daa6eb5bb511c8f4', 'save', 'linkbar', false, ''); return false;" title="Save this job to my.indeed">Save job</a><span class="result-link-bar-separator">·</span><button aria-expanded="false" class="sl resultLink more-link" id="tog_13" onclick="toggleMoreLinks('daa6eb5bb511c8f4', '13'); return false;">More...</button></span><div class="edit_note_content" id="editsaved2_daa6eb5bb511c8f4" style="display:none;"></div><script>if (!window['result_daa6eb5bb511c8f4']) {window['result_daa6eb5bb511c8f4'] = {};}window['result_daa6eb5bb511c8f4']['showSource'] = false; window['result_daa6eb5bb511c8f4']['source'] = "Indeed"; window['result_daa6eb5bb511c8f4']['loggedIn'] = false; window['result_daa6eb5bb511c8f4']['showMyJobsLinks'] = false;window['result_daa6eb5bb511c8f4']['undoAction'] = "unsave";window['result_daa6eb5bb511c8f4']['relativeJobAge'] = "19 days ago";window['result_daa6eb5bb511c8f4']['jobKey'] = "daa6eb5bb511c8f4"; window['result_daa6eb5bb511c8f4']['myIndeedAvailable'] = true; window['result_daa6eb5bb511c8f4']['showMoreActionsLink'] = window['result_daa6eb5bb511c8f4']['showMoreActionsLink'] || true; window['result_daa6eb5bb511c8f4']['resultNumber'] = 13; window['result_daa6eb5bb511c8f4']['jobStateChangedToSaved'] = false; window['result_daa6eb5bb511c8f4']['searchState'] = "q=intelligence analyst&start=2"; window['result_daa6eb5bb511c8f4']['basicPermaLink'] = "https://www.indeed.com"; window['result_daa6eb5bb511c8f4']['saveJobFailed'] = false; window['result_daa6eb5bb511c8f4']['removeJobFailed'] = false; window['result_daa6eb5bb511c8f4']['requestPending'] = false; window['result_daa6eb5bb511c8f4']['notesEnabled'] = true; window['result_daa6eb5bb511c8f4']['currentPage'] = "serp"; window['result_daa6eb5bb511c8f4']['sponsored'] = 
false;window['result_daa6eb5bb511c8f4']['reportJobButtonEnabled'] = false; window['result_daa6eb5bb511c8f4']['showMyJobsHired'] = false; window['result_daa6eb5bb511c8f4']['showSaveForSponsored'] = false; window['result_daa6eb5bb511c8f4']['showJobAge'] = true; window['result_daa6eb5bb511c8f4']['showHolisticCard'] = true; window['result_daa6eb5bb511c8f4']['showDislike'] = true; window['result_daa6eb5bb511c8f4']['showKebab'] = true; window['result_daa6eb5bb511c8f4']['showReport'] = true;</script></div></div>
</div>
</div>
<div class="tab-container">
<div class="more-links-container result-tab" id="tt_display_13" style="display:none;"><div class="more_actions" id="more_13"><ul><li><span class="mat">View all <a href="/q-Samaritan-Protective-Services-l-Woodbridge,-VA-jobs.html">Samaritan Protective Services jobs in Woodbridge, VA</a> - <a href="/l-Woodbridge,-VA-jobs.html">Woodbridge jobs</a></span></li></ul></div><a class="close-link closeLink" href="#" onclick="toggleMoreLinks('daa6eb5bb511c8f4'); return false;" title="Close"></a></div><div class="dya-container result-tab"></div>
<div class="tellafriend-container result-tab email_job_content"></div>
<div class="sign-in-container result-tab"></div>
<div class="notes-container result-tab"></div>
</div>
</div>
<div class="jobToJobRec_Hide" id="jobToJobRec_daa6eb5bb511c8f4_sj"></div>
<div class="jobsearch-SerpJobCard unifiedRow row result" data-jk="4ab393028b08495d" data-tn-component="organicJob" id="p_4ab393028b08495d">
<h2 class="title">
<a class="jobtitle turnstileLink" data-tn-element="jobTitle" href="/rc/clk?jk=4ab393028b08495d&fccid=bcfb998b053bf6ab&vjs=3" id="jl_4ab393028b08495d" onclick="setRefineByCookie([]); return rclk(this,jobmap[14],true,0);" onmousedown="return rclk(this,jobmap[14],0);" rel="noopener nofollow" target="_blank" title="Intelligence Analyst">
<b>Intelligence</b> <b>Analyst</b></a>
<span class="new">new</span></h2>
<div class="sjcl">
<div>
<span class="company">
Semantic AI</span>
</div>
<div class="recJobLoc" data-rc-loc="Alexandria, VA" id="recJobLoc_4ab393028b08495d" style="display: none"></div>
<span class="location accessible-contrast-color-location">Alexandria, VA</span>
</div>
<table class="jobCardShelfContainer" role="presentation"><tr class="jobCardShelf"><td class="jobCardShelfItem indeedApply"><span class="jobCardShelfIcon"><svg fill="none" height="16" viewbox="0 0 20 20" width="16"><rect fill="#FF5A1F" height="20" rx="10" width="20"></rect><path clip-rule="evenodd" d="M15.3125 4.0625L10.8125 15.3125L7.99999 11.375L15.3125 4.0625ZM7.604 12.7576L6.875 15.3125L8.567 14.1054L7.604 12.7576ZM7.20463 10.5796L12.419 5.36525L4.0625 9.125L6.9875 10.7968L7.20463 10.5796Z" fill="white" fill-rule="evenodd"></path></svg></span><span class="iaLabel iaIconActive">Easily apply</span></td></tr></table><div class="summary">
<ul style="list-style-type:circle;margin-top: 0px;margin-bottom: 0px;padding-left:20px;">
<li>Advising and mentoring customer <b>analysts</b> on the use of <b>intelligence</b> analysis software, to include introducing new software tools and supporting training on its…</li>
</ul></div>
<div class="jobsearch-SerpJobCard-footer">
<div class="jobsearch-SerpJobCard-footerActions">
<div class="result-link-bar-container">
<div class="result-link-bar"><span class="date">Today</span><span class="tt_set" id="tt_set_14"><div class="job-reaction"><button aria-expanded="false" aria-haspopup="true" aria-label="save or dislike" class="job-reaction-kebab" data-ol-has-click-handler="" onclick="toggleKebabMenu('4ab393028b08495d', false, event); return false;" tabindex="0"></button><span class="job-reaction-kebab-menu"><button class="job-reaction-kebab-item job-reaction-save" data-ol-has-click-handler="" onclick="changeJobState('4ab393028b08495d', 'save', 'linkbar', false, '');return false;"><svg focusable="false" height="16" viewbox="0 0 24 24" width="16"><g><path d="M16.5,3A6,6,0,0,0,12,5.09,6,6,0,0,0,7.5,3,5.45,5.45,0,0,0,2,8.5C2,12.28,5.4,15.36,10.55,20L12,21.35,13.45,20C18.6,15.36,22,12.28,22,8.5A5.45,5.45,0,0,0,16.5,3ZM12.1,18.55l-0.1.1-0.1-.1C7.14,14.24,4,11.39,4,8.5A3.42,3.42,0,0,1,7.5,5a3.91,3.91,0,0,1,3.57,2.36h1.87A3.88,3.88,0,0,1,16.5,5,3.42,3.42,0,0,1,20,8.5C20,11.39,16.86,14.24,12.1,18.55Z" fill="#2d2d2d"></path></g></svg><span class="job-reaction-kebab-item-text">Save job</span></button><button class="job-reaction-kebab-item job-reaction-dislike" data-ol-has-click-handler="" onclick="dislikeJob(false, false, '4ab393028b08495d', 'unsave', 'linkbar', false, '');"><span class="job-reaction-dislike-icon"></span><span class="job-reaction-kebab-item-text">Not interested</span></button><button class="job-reaction-kebab-item job-reaction-report" onclick="reportJob('4ab393028b08495d');"><span class="job-reaction-report-icon"></span><span class="job-reaction-kebab-item-text">Report Job</span></button></span></div><span class="result-link-bar-separator">·</span><a class="sl resultLink save-job-link" href="#" id="sj_4ab393028b08495d" onclick="changeJobState('4ab393028b08495d', 'save', 'linkbar', false, ''); return false;" title="Save this job to my.indeed">Save job</a><span class="result-link-bar-separator">·</span><button aria-expanded="false" class="sl resultLink more-link" id="tog_14" onclick="toggleMoreLinks('4ab393028b08495d', '14'); return false;">More...</button></span><div class="edit_note_content" id="editsaved2_4ab393028b08495d" style="display:none;"></div><script>if (!window['result_4ab393028b08495d']) {window['result_4ab393028b08495d'] = {};}window['result_4ab393028b08495d']['showSource'] = false; window['result_4ab393028b08495d']['source'] = "Semantic AI"; window['result_4ab393028b08495d']['loggedIn'] = false; window['result_4ab393028b08495d']['showMyJobsLinks'] = false;window['result_4ab393028b08495d']['undoAction'] = "unsave";window['result_4ab393028b08495d']['relativeJobAge'] = "Today";window['result_4ab393028b08495d']['jobKey'] = "4ab393028b08495d"; window['result_4ab393028b08495d']['myIndeedAvailable'] = true; window['result_4ab393028b08495d']['showMoreActionsLink'] = window['result_4ab393028b08495d']['showMoreActionsLink'] || true; window['result_4ab393028b08495d']['resultNumber'] = 14; window['result_4ab393028b08495d']['jobStateChangedToSaved'] = false; window['result_4ab393028b08495d']['searchState'] = "q=intelligence analyst&start=2"; window['result_4ab393028b08495d']['basicPermaLink'] = "https://www.indeed.com"; window['result_4ab393028b08495d']['saveJobFailed'] = false; window['result_4ab393028b08495d']['removeJobFailed'] = false; window['result_4ab393028b08495d']['requestPending'] = false; window['result_4ab393028b08495d']['notesEnabled'] = true; window['result_4ab393028b08495d']['currentPage'] = "serp"; window['result_4ab393028b08495d']['sponsored'] = 
false;window['result_4ab393028b08495d']['reportJobButtonEnabled'] = false; window['result_4ab393028b08495d']['showMyJobsHired'] = false; window['result_4ab393028b08495d']['showSaveForSponsored'] = false; window['result_4ab393028b08495d']['showJobAge'] = true; window['result_4ab393028b08495d']['showHolisticCard'] = true; window['result_4ab393028b08495d']['showDislike'] = true; window['result_4ab393028b08495d']['showKebab'] = true; window['result_4ab393028b08495d']['showReport'] = true;</script></div></div>
</div>
</div>
<div class="tab-container">
<div class="more-links-container result-tab" id="tt_display_14" style="display:none;"><div class="more_actions" id="more_14"><ul><li><span class="mat">View all <a href="/q-Semantic-Ai-l-Alexandria,-VA-jobs.html">Semantic AI jobs in Alexandria, VA</a> - <a href="/l-Alexandria,-VA-jobs.html">Alexandria jobs</a></span></li></ul></div><a class="close-link closeLink" href="#" onclick="toggleMoreLinks('4ab393028b08495d'); return false;" title="Close"></a></div><div class="dya-container result-tab"></div>
<div class="tellafriend-container result-tab email_job_content"></div>
<div class="sign-in-container result-tab"></div>
<div class="notes-container result-tab"></div>
</div>
</div>
<div class="jobToJobRec_Hide" id="jobToJobRec_4ab393028b08495d_sj"></div>
<script type="text/javascript">
function ptk(st,p) {
document.cookie = 'PTK="tk=&type=jobsearch&subtype=' + st + (p ? '&' + p : '')
+ (st == 'pagination' ? '&fp=2' : '')
+'"; path=/';
}
</script>
<script type="text/javascript">
function pclk(event) {
var evt = event || window.event;
var target = evt.target || evt.srcElement;
var el = target.nodeType == 1 ? target : target.parentNode;
var tag = el.tagName.toLowerCase();
if (tag == 'span' || tag == 'a') {
ptk('pagination');
}
return true;
}
function addPPUrlParam(obj) {
var pp = obj.getAttribute('data-pp');
var href = obj.getAttribute('href');
if (pp && href) {
obj.setAttribute('href', href + '&pp=' + pp);
}
}
</script>
<nav aria-label="pagination" role="navigation"><div class="pagination" onmousedown="pclk(event);">
<ul class="pagination-list"><li><a aria-label="Previous" href="/jobs?q=intelligence+analyst" rel="nofollow"><span class="pn"><span class="np"><svg fill="none" height="24" width="24"><path d="M15.41 7.41L14 6l-6 6 6 6 1.41-1.41L10.83 12l4.58-4.59z" fill="#2D2D2D"></path></svg></span></span></a></li><li><a aria-label="1" href="/jobs?q=intelligence+analyst" rel="nofollow"><span class="pn">1</span></a></li><li><b aria-current="true" aria-label="2" tabindex="0">2</b></li><li><a aria-label="3" data-pp="gQAeAAAAAAAAAAAAAAABjvqINwBJAQEBBwIVB6C-ejGvv7Ptw5Nh18LPaxYwI21WoWQrdUi7Bjb4Jh2XnwnWEam_h1Lk1UAJf9p7vRSOZmyC1lKWFkZlzPxauvT4ywAA" href="/jobs?q=intelligence+analyst&start=20" onmousedown="addPPUrlParam && addPPUrlParam(this);" rel="nofollow"><span class="pn">3</span></a></li><li><a aria-label="4" data-pp="gQAtAAAAAAAAAAAAAAABjvqINwBpAQEBDAElLD8b4CWAgvzfZP17Veja6gKe1ywhq6x5EzE_FLTPUxKma2v14M8RCnS6YKvdj00lGyVZEdpNPEa_BTOVTEyXi33qyVCNb-GkwQRxmvZpyX5-9S4gHsmSqt0bPmNI2WboLSdwAAA" href="/jobs?q=intelligence+analyst&start=30" onmousedown="addPPUrlParam && addPPUrlParam(this);" rel="nofollow"><span class="pn">4</span></a></li><li><a aria-label="Next" data-pp="gQAeAAAAAAAAAAAAAAABjvqINwBJAQEBBwIVB6C-ejGvv7Ptw5Nh18LPaxYwI21WoWQrdUi7Bjb4Jh2XnwnWEam_h1Lk1UAJf9p7vRSOZmyC1lKWFkZlzPxauvT4ywAA" href="/jobs?q=intelligence+analyst&start=20" onmousedown="addPPUrlParam && addPPUrlParam(this);" rel="nofollow"><span class="pn"><span class="np"><svg fill="none" height="24" width="24"><path d="M10 6L8.59 7.41 13.17 12l-4.58 4.59L10 18l6-6-6-6z" fill="#2D2D2D"></path></svg></span></span></a></li></ul></div>
</nav><div class="mosaic-zone" id="mosaic-zone-belowJobResultsPagination"><div class="mosaic mosaic-provider-jsfe-career-questions" id="mosaic-provider-jsfe-career-questions"></div></div><script type="text/javascript">
try {
window.mosaic.onMosaicApiReady(function() {
var zoneId = 'belowJobResultsPagination';
var providers = window.mosaic.zonedProviders[zoneId];
if (providers) {
providers.filter(function(p) { return window.mosaic.lazyFns[p]; }).forEach(function(p) {
return window.mosaic.api.loadProvider(p);
});
}
});
} catch (e) {};
</script></td>
###Markdown
Save Data to Database. Now we find the div tags that contain the job posts. For each post we identify the job title, company, location, salary, and summary, and save those records to our table in the database.
###Code
# identify the job title, company, location, salary, and summary of each post
for div_row in td_resultsCol.find_all('div', class_='jobsearch-SerpJobCard unifiedRow row result'):
# find job title
job_title = None
job_company = None
job_rating = None
job_loc = None
job_salary = None
job_summary = None
for h2_title in div_row.find_all('h2', class_ = 'title'):
job_title = h2_title.a.text.strip().replace("'","_")
for div_dsc in div_row.find_all('div', class_ = 'sjcl'):
#find company name
for span_company in div_dsc.find_all('span', class_ = 'company'):
job_company = span_company.text.strip().replace("'","_")
# find location
for div_loc in div_dsc.find_all('div', class_ = 'location accessible-contrast-color-location'):
job_loc = div_loc.text.strip().replace("'","_")
# find salary
for div_salary in div_row.find_all('div',class_ ='salarySnippet'):
job_salary = div_salary.text.strip().replace("'","_")
#find summary
for div_summary in div_row.find_all('div', class_ = 'summary'):
job_summary = div_summary.text.strip().replace("'","_")
# insert into database
sql_insert = """
insert into gp32.indeed(job_title,job_company,job_loc,job_salary,job_summary)
values('{}','{}','{}','{}','{}')
""".format(job_title,job_company,job_loc,job_salary,job_summary)
cur.execute(sql_insert)
conn.commit()
###Output
_____no_output_____
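###Markdown
A side note on the insert above (a suggested improvement, not part of the original notebook): formatting the scraped values straight into the SQL string is why every field needs the `replace("'","_")` escaping, and it leaves the insert open to SQL injection. A minimal sketch of a parameterized version, assuming the same psycopg2 connection `conn` and cursor `cur`:
###Code
# parameterized insert: the driver handles quoting, so no manual escaping is needed
sql_insert = """
insert into gp32.indeed(job_title,job_company,job_loc,job_salary,job_summary)
values (%s,%s,%s,%s,%s)
"""
cur.execute(sql_insert, (job_title, job_company, job_loc, job_salary, job_summary))
conn.commit()
###Output
_____no_output_____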
###Markdown
View the Table
###Code
df = pandas.read_sql_query('select * from gp32.indeed',conn)
df[:]
###Output
_____no_output_____
###Markdown
Query the Table
###Code
df = pandas.read_sql_query('select count(*) as count,job_title from gp32.indeed group by job_title order by count desc ', conn)
df.plot.bar(x='job_title')
cur.close()
conn.close()
###Output
_____no_output_____ |
Application.ipynb | ###Markdown
The price of one apple and one orange
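In matrix form (reconstructed from the arrays below), with x the price of an apple and y the price of an orange: 20x + 10y = 350 and 17x + 22y = 500, i.e. AX = B, so X can be found as A^-1 B or with a linear solver.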
###Code
import numpy as np
from scipy.linalg import solve
A = np.array([[20,10],[17,22]])
B = np.array([[350],[500]])
print(A)
print(B)
X = np.linalg.solve(A,B)
print(X)
inv_A= np.linalg.inv(A)
print(inv_A)
X = np.linalg.inv(A).dot(B)
print(X)
X = np.dot(inv_A,B)
print(X)
###Output
[[10.]
[15.]]
###Markdown
Solving three linear equations with unknown variables x, y, and z.
###Code
A = np.array([[4,3,2],[-2,2,3],[3,-5,2]])
print(A)
B = np.array([[25],[-10],[-4]])
print(B)
X = solve(A,B)
print(X)
###Output
[[ 4 3 2]
[-2 2 3]
[ 3 -5 2]]
[[ 25]
[-10]
[ -4]]
[[ 5.]
[ 3.]
[-2.]]
###Markdown
The price of one apple and one orange
###Code
import numpy as np
from scipy.linalg import solve
A = np.array([[20,10],[17,22]])
B = np.array([[350],[500]])
print(A)
print(B)
X = solve(A,B)
print(X)
inv_A = np.linalg.inv(A)
print(inv_A)
X = np.linalg.inv(A).dot(B)
print(X)
###Output
[[ 0.08148148 -0.03703704]
[-0.06296296 0.07407407]]
[[10.]
[15.]]
###Markdown
Solving for three linear equations with unknown variables x, y, and z
###Code
#4x+3y+2z=25
#-2z+2y+3z=-10
#3x-5y+2z=-4
A = np.array([[4,3,2],[-2,2,3],[3,-5,2]])
B = np.array([[25],[-10],[-4]])
print(A)
print(B)
X = solve(A,B)
print(X)
###Output
[[ 4 3 2]
[-2 2 3]
[ 3 -5 2]]
[[ 25]
[-10]
[ -4]]
[[ 5.]
[ 3.]
[-2.]]
###Markdown
The price of one apple and orange
###Code
import numpy as np
A = np.array([[20,10],[17,22]])
B = np.array([[350],[500]])
print(A)
print(B)
X = np.linalg.solve(A,B)
print(X)
inv_A= np.linalg.inv(A)
print(inv_A)
X = np.linalg.inv(A).dot(B)
print(X)
X = np.dot(inv_A,B)
print(X)
###Output
[[10.]
[15.]]
###Markdown
Solving for three linear equations with unknown variables x, y, and z: 4x+3y+2z=25, -2x+2y+3z=-10, 3x-5y+2z=-4
###Code
import numpy as np
from scipy.linalg import solve
A=np.array([[4,3,2],[-2,2,3],[3,-5,2]])
print(A)
B=np.array([[25],[-10],[-4]])
print(B)
X=solve(A,B)
print(X)
###Output
[[ 4 3 2]
[-2 2 3]
[ 3 -5 2]]
[[ 25]
[-10]
[ -4]]
[[ 5.]
[ 3.]
[-2.]]
###Markdown
Define HyperParameters
###Code
args_noise_estimation = True # whether to estimate noise or not
args_init = True # whether to initialize the input with bilinear interpolation
args_use_gpu = True
args_block_size = (512, 512)
args_model = 'pretrained_models/bayer_noisy/' # model path
# Define folder with RAW images
args_img_folder = '/home/datasets/raise/' # folder of RAW images
args_output_folder = 'output/' # save results to folder
args_type = '.png' # image type to save as
if 'xtrans' in args_model:
args_pattern = 'xtrans'
else:
args_pattern = 'RGGB'
###Output
_____no_output_____
###Markdown
Load Model
###Code
import torch  # assumed to have been imported in an earlier (omitted) cell
# ResNet_Den, BasicBlock and MMNet are the project's own modules (not shown here)
model_params = torch.load(args_model+'model_best.pth')
model = ResNet_Den(BasicBlock, model_params[2], weightnorm=True)
mmnet = MMNet(model, max_iter=model_params[1])
for param in mmnet.parameters():
param.requires_grad = False
mmnet.load_state_dict(model_params[0])
if args_use_gpu:
mmnet = mmnet.cuda()
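# with requires_grad=False set above, the network runs inference only;
# no gradients are tracked for the pretrained weights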
###Output
_____no_output_____
###Markdown
Process Images
###Code
# standard imports assumed from earlier (omitted) cells of this notebook
import os, glob
import numpy as np
from subprocess import call
from tqdm import tqdm
from skimage import io, util
import torch
import torch.nn.functional as F
# check_pattern, rescale_to_255f, utils (mask generation), Demosaic and
# linrgb_to_srgb come from the project's own codebase and are not shown here
if not os.path.exists(args_output_folder):
os.makedirs(args_output_folder)
filepaths_img = glob.glob(args_img_folder+'*')
filepaths_img.sort()
filepaths_img = np.random.choice(filepaths_img, 50, replace=False)
cnt = 0
for img_path in tqdm(filepaths_img):
try:
if cnt > 50:
break
else:
cnt += 1
print('Processing ', img_path)
call(["dcraw","-j","-d","-T","-4","-w", "+M", img_path])
# convert to RGGB CFA
rollx, rolly = check_pattern(img_path)
img_path = img_path.split(".")
img_path[-1] = '.tiff'
img_path = "".join(img_path)
img = io.imread(img_path)
img = np.roll(img, rollx,rolly)
res = rescale_to_255f(img)
# pad according to block size
if res.shape[0] % args_block_size[0] != 0:
mod = args_block_size[0]- res.shape[0] % args_block_size[0]
res = np.pad(res, ((0,mod),(0,0)), 'constant')
if res.shape[1] % args_block_size[1] != 0:
mod = args_block_size[1]- res.shape[1] % args_block_size[1]
res = np.pad(res, ((0,0), (0,mod)), 'constant')
blocks = util.view_as_blocks(res, block_shape=args_block_size)
def process_patch(patch):
with torch.no_grad():
mmnet.eval()
mosaic = torch.FloatTensor(patch).float()[None]
# padding in order to ensure no boundary artifacts
mosaic = F.pad(mosaic[:,None],(8,8,8,8),'reflect')[:,0]
shape = mosaic[0].shape
mask = utils.generate_mask(shape, pattern=args_pattern)
M = torch.FloatTensor(mask)[None]
mosaic = mosaic[...,None]*M
mosaic = mosaic.permute(0,3,1,2)
M = M.permute(0,3,1,2)
p = Demosaic(mosaic.float(), M.float())
if args_use_gpu:
p.cuda_()
xcur = mmnet.forward_all_iter(p, max_iter=mmnet.max_iter, init=args_init, noise_estimation=args_noise_estimation)
return xcur[0].cpu().data.permute(1,2,0).numpy()[8:-8,8:-8]
# demosaick image
block_size = blocks.shape[-2:]
num_blocks = blocks.shape[:2]
original_size = (blocks.shape[0] * blocks.shape[2], blocks.shape[1] * blocks.shape[3])
final_img = np.zeros((original_size[0], original_size[1],3), dtype=np.float32)
for i in range(num_blocks[0]):
for j in range(num_blocks[1]):
patch_result = process_patch(blocks[i,j])
final_img[i*block_size[0]:(i+1)*block_size[0], j*block_size[1]:(j+1)*block_size[1]] = patch_result
final_img = final_img[:img.shape[0],:img.shape[1]]
final_img = np.roll(final_img, -rollx, -rolly)
        # remove intermediate .tiff image
call(["rm", img_path])
img_path = img_path.replace(args_img_folder, args_output_folder)
# save the linRGB image
io.imsave(img_path.replace('.tiff', args_type),final_img.astype(np.uint8))
# save the sRGB image
srgb = linrgb_to_srgb(final_img/255)
io.imsave(img_path.replace('.tiff', '_srgb'+args_type),srgb.clip(0,1))
except Exception as e:
print(e)
###Output
_____no_output_____
###Markdown
The price of one apple and one orange
###Code
import numpy as np
from scipy.linalg import solve
A = np.array([[20,10],[17,22]])
B = np.array([[350],[500]])
print(A)
print(B)
X = solve(A,B)
print(X)
inv_A = np.linalg.inv(A)
print(inv_A)
X = np.linalg.inv(A).dot(B)
print(X)
X = np.dot(inv_A,B)
print(X)
###Output
[[10.]
[15.]]
###Markdown
Solving for three linear equations with unknown variables of x, y, and z
###Code
#4x+3y+2z=25
#-2z+2y+3z=-10
#3x-5y+2z=-4
A = np.array([[4,3,2],[-2,2,3],[3,-5,2]])
print(A)
B = np.array([[25],[-10],[-4]])
print(B)
X = solve(A,B)
print(X)
###Output
[[ 4 3 2]
[-2 2 3]
[ 3 -5 2]]
[[ 25]
[-10]
[ -4]]
[[ 5.]
[ 3.]
[-2.]]
###Markdown
The price of one apple and one orange
###Code
import numpy as np
from scipy.linalg import solve
A = np.array([[20,10],[17,22]])
B = np.array([[350],[500]])
print(A)
print(B)
X = solve(A,B)
print(X)
inv_A=np.linalg.inv(A)
print(inv_A)
X = np.linalg.inv(A).dot(B)
print(X)
X=np.dot(inv_A,B)
print(X)
###Output
[[10.]
[15.]]
###Markdown
Solving for three linear equations with unknown variables of x, y, and z
###Code
#4x+3y+2z=25
#-2z+2y+3z=-10
#3x-5y+2z=-4
A = np.array([[4,3,2],[-2,2,3],[3,-5,2]])
print(A)
B = np.array([[25],[-10],[-4]])
print(B)
X = solve(A,B)
print(X)
###Output
[[ 4 3 2]
[-2 2 3]
[ 3 -5 2]]
[[ 25]
[-10]
[ -4]]
[[ 5.]
[ 3.]
[-2.]]
###Markdown
###Code
import numpy as np
A = np.array([[20,10],[17,22]])
B = np.array([[350],[500]])
print(A)
print(B)
#other step on how to solve linear equations with NumPy and linalg.solve
A = np.array([[20,10],[17,22]])
B = np.array([[350],[500]])
print(A)
print(B)
X = np.linalg.solve(A,B)
print(X)
import numpy as np
from scipy.linalg import solve #other option you can try using SciPy
A = np.array([[20,10],[17,22]])
B = np.array([[350],[500]])
print(A)
print(B)
X = solve (A,B)
print(X)
inv_A = np.linalg.inv(A) #Inverse of A
print(inv_A)
X = np.linalg.inv(A).dot(B) #unknown values of determining the x and y
print(X)
#other option that you can try
X = np.dot(inv_A,B)
print(X)
#4x+3y+2z=25
#-2x+2y+3z=-10
#3x-5y+2z=-4
A = np.array([[4,3,2],[-2,2,3],[3,-5,2]])
print(A)
B = np.array([[25],[-10],[-4]])
print (B)
X = solve(A,B)
print(X)
###Output
[[ 4 3 2]
[-2 2 3]
[ 3 -5 2]]
[[ 25]
[-10]
[ -4]]
[[ 5.]
[ 3.]
[-2.]]
###Markdown
November 17
###Code
import numpy as np
A=np.array([[4,3,2],[-2,2,3],[3,-5,2]])
B=np.array([[25],[-10],[-4]])
print(A,"\n \n",B)
x=np.linalg.solve(A,B)
print("\n Answer: \n",x)
###Output
[[ 4 3 2]
[-2 2 3]
[ 3 -5 2]]
[[ 25]
[-10]
[ -4]]
Answer:
[[ 5.]
[ 3.]
[-2.]]
###Markdown
**using scipy.linalg**
###Code
import numpy as np
from scipy.linalg import solve
A=np.array([[4,3,2],[-2,2,3],[3,-5,2]])
B=np.array([[25],[-10],[-4]])
print(A,"\n \n",B)
x=solve(A,B)
print("\n Answer: \n",x)
###Output
_____no_output_____
###Markdown
The price of one apple and one orange
###Code
import numpy as np
A = np.array([[20,10],[17,22]])
B = np.array([[350],[500]])
print(A)
print(B)
inv_A = np.linalg.inv(A)
print(inv_A)
X = np.linalg.inv(A).dot(B)
print(X)
X = np.dot(inv_A,B)
print(X)
import numpy as np
A = np.array([[20,10],[17,22]])
B = np.array([[350],[500]])
print(A)
print(B)
X = np.linalg.solve(A,B)
print(X)
import numpy as np
from scipy.linalg import solve
A = np.array([[20,10],[17,22]])
B = np.array([[350],[500]])
print(A)
print(B)
X = solve(A,B)
print(X)
#4x+3y+2z=25
#-2x+2y+3z=-10
#3x-5y+2z=-4
A = np.array([[4,3,2],[-2,2,3],[3,-5,2]])
print(A)
B = np.array([[25],[-10],[-4]])
print(B)
X = solve(A,B)
print(X)
###Output
[[ 4 3 2]
[-2 2 3]
[ 3 -5 2]]
[[ 25]
[-10]
[ -4]]
[[ 5.]
[ 3.]
[-2.]]
###Markdown
###Code
import numpy as np
from scipy.linalg import solve
A = np.array([[20,10],[17,22]])
B = np.array([[350],[500]])
print(A)
print(B)
X = solve(A,B)
print(X)
inv_A = np.linalg.inv(A)
print(inv_A)
X = np.linalg.inv(A).dot(B)
print(X)
X = np.dot(inv_A,B)
print(X)
#4x+3y+2z=25
#-2x+2y+3z=-10
#3x-5y+2z=-4
A = np.array([[4,3,2],[-2,2,3],[3,-5,2]])
print(A)
B = np.array([[25],[-10],[-4]])
print(B)
X = solve(A,B)
print(X)
###Output
[[ 4 3 2]
[-2 2 3]
[ 3 -5 2]]
[[ 25]
[-10]
[ -4]]
[[ 5.]
[ 3.]
[-2.]]
###Markdown
###Code
import numpy as np
from scipy.linalg import solve
A = np.array([[20,10],[17,22]]) #Creation of 2x2 Matrix
B = np.array([[350],[500]]) #Creation of Matrix
#to print
print(A)
print()
print(B)
print()
X= np.linalg.solve(A,B) #way to solve
print(X)
X= solve(A,B) #solving using scipy library
print()
print(X)
inv_A = np.linalg.inv(A) #To inverse A
print(inv_A)
X = np.linalg.inv(A).dot(B) #dot product of Inv of A and B
print(X)
X = np.dot(inv_A,B) #Checking
X
###Output
_____no_output_____
###Markdown
Solving for three linear equations with unknown variables of x, y, and z
###Code
#4x+3y+2z=25
#-2x+2y+3z=-10
# 3x-5y+2z=-4
A= np.array([[4,3,2],[-2,2,3],[3,-5,2]]) #Creation of 3x3 Matrix
B= np.array([[25],[-10],[-4]]) #Creation of Matrix
#To Print
print(A)
print(B)
X= solve(A,B) #Solving using scipy
print("The Solution is:")
print(X)
###Output
[[ 4 3 2]
[-2 2 3]
[ 3 -5 2]]
[[ 25]
[-10]
[ -4]]
The Solution is:
[[ 5.]
[ 3.]
[-2.]]
###Markdown
**The price of one apple and one orange**
###Code
import numpy as np
A = np.array([[20,10],[17,22]])
B = np.array([[350],[500]])
print(A)
print(B)
#other step on how to solve linear equations with NumPy and linalg.solve
A = np.array([[20,10],[17,22]])
B = np.array([[350],[500]])
print(A)
print(B)
X = np.linalg.solve(A,B)
print(X)
import numpy as np
from scipy.linalg import solve #other option you can try using SciPy
A = np.array([[20,10],[17,22]])
B = np.array([[350],[500]])
print(A)
print(B)
X = solve (A,B)
print(X)
inv_A = np.linalg.inv(A) #Inverse of A
print(inv_A)
X = np.linalg.inv(A).dot(B) #unknown values of determining the x and y
print(X)
#other option that you can try
X = np.dot(inv_A,B)
print(X)
###Output
[[10.]
[15.]]
###Markdown
**Solving for three linear equations with unknown variables of x, y, and z**
###Code
#4x+3y+2z=25
#-2x+2y+3z=-10
#3x-5y+2z=-4
A = np.array([[4,3,2],[-2,2,3],[3,-5,2]])
print(A)
B = np.array([[25],[-10],[-4]])
print (B)
X = solve(A,B)
print(X)
###Output
[[ 4 3 2]
[-2 2 3]
[ 3 -5 2]]
[[ 25]
[-10]
[ -4]]
[[ 5.]
[ 3.]
[-2.]]
###Markdown
**The price of one apple and one orange**
###Code
import numpy as np
from scipy.linalg import solve
A = np.array([[20,10],[17,22]])
B = np.array([[350],[500]])
print(A)
print(B)
#Method 1 - Using scipy
X = solve(A,B)
print(X)
#Method 2 - Direct solution
inv_A = np.linalg.inv(A)
print(inv_A)
X = np.linalg.inv(A).dot(B)
print(X)
#Method 3 - Using dot product
X = np.dot(inv_A,B)
print(X)
#Method 4 - Using linalg.solve
X = np.linalg.solve(A,B)
print(X)
###Output
_____no_output_____
###Markdown
**Solving for three linear equations with unknown variables of x, y, and z**
###Code
#4x+3y+2z=25
#-2x+2y+3z=-10
#3x-5y+2z=-4
matrix = ([[4,3,2],[-2,2,3],[3,-5,2]])
const = ([[25],[-10],[-4]])
ans = np.linalg.solve(matrix,const)
print(ans)
###Output
[[ 5.]
[ 3.]
[-2.]]
###Markdown
The Price of one apple and orange
###Code
import numpy as np
A = np.array([[20,10],[17,22]])
B = np.array([[350],[500]])
print(A)
print(B)
inv_A=np.linalg.inv(A)
print(inv_A)
X = np.linalg.inv(A).dot(B)
print(X)
X=np.dot(inv_A,B)
print(X)
import numpy as np
A = np.array([[20,10],[17,22]])
B = np.array([[350],[500]])
print(A)
print(B)
X = np.linalg.solve(A,B)
print(X)
import numpy as np
from scipy.linalg import solve
A = np.array([[20,10],[17,22]])
B = np.array([[350],[500]])
print(A)
print(B)
X = solve(A,B)
print(X)
#4x+3y+2z=25
#-2x+2y+3z=-10
#3x-5y+2z=-4
A = np.array([[4,3,2],[-2,2,3],[3,-5,2]])
print(A)
B = np.array([[25],[-10],[-4]])
print(B)
X=solve(A,B)
print(X)
###Output
[[ 4 3 2]
[-2 2 3]
[ 3 -5 2]]
[[ 25]
[-10]
[ -4]]
[[ 5.]
 [ 3.]
 [-2.]]
###Markdown
The price of one orange and one apple
###Code
import numpy as np
A = np.array([[20,10],[17,22]])
B = np.array([[350],[500]])
print(A)
print(B)
#other step on how to solve linear equations with NumPy and linalg.solve
import numpy as np
A = np.array([[20,10],[17,22]])
B = np.array([[350],[500]])
print(A)
print(B)
X = np.linalg.solve(A,B)
print(X)
import numpy as np
from scipy.linalg import solve #other option you can try using SciPy
A = np.array([[20,10],[17,22]])
B = np.array([[350],[500]])
print(A)
print(B)
X = solve (A,B)
print(X)
import numpy as np
inv_A = np.linalg.inv(A) #Inverse of A
print(inv_A)
X = np.linalg.inv(A).dot(B) #unknown values of determining the x and y
print(X)
#other option that you can try
import numpy as np
X = np.dot(inv_A,B)
print(X)
###Output
[[10.]
[15.]]
###Markdown
Solving for three linear equations with unknown variables of x, y, and z
###Code
#4x+3y+2z=25
#-2x+2y+3z=-10
#3x-5y+2z=-4
import numpy as np
A = np.array([[4,3,2],[-2,2,3],[3,-5,2]])
print(A)
B = np.array([[25],[-10],[-4]])
print (B)
X = solve(A,B)
print(X)
###Output
[[ 4 3 2]
[-2 2 3]
[ 3 -5 2]]
[[ 25]
[-10]
[ -4]]
[[ 5.]
[ 3.]
[-2.]]
###Markdown
The price of one apple and one orange
###Code
import numpy as np
from scipy.linalg import solve
A=np.array([[20,10],[17,22]])
B=np.array([[350],[500]])
print(A)
print(B)
X=solve(A,B)
print(X)
inv_A=np.linalg.inv(A)
print(inv_A)
X=np.linalg.inv(A).dot(B)
print(X)
X=np.dot(inv_A,B)
print(X)
###Output
[[10.]
[15.]]
###Markdown
Solving for three linear equations with unknown variables x, y, and z
###Code
#4x+3y+2z=25
#-2x+2y+3z=-10
#3x-5y+2z=-4
from scipy.linalg import solve
A=np.array([[4,3,2],[-2,2,3],[3,-5,2]])
print(A)
B=np.array([[25],[-10],[-4]])
print(B)
X=solve(A,B)
print(X)
###Output
[[ 4 3 2]
[-2 2 3]
[ 3 -5 2]]
[[ 25]
[-10]
[ -4]]
[[ 5.]
[ 3.]
[-2.]]
###Markdown
The price of one apple and one orange
###Code
import numpy as np
A = np.array([[20,10],[17,22]])
B = np.array([[350],[500]])
print(A)
print(B)
inv_A=np.linalg.inv(A)
print(inv_A)
X = np.linalg.inv(A).dot(B)
print(X)
X=np.dot(inv_A,B)
print(X)
import numpy as np
A = np.array([[20,10],[17,22]])
B = np.array([[350],[500]])
print(A)
print(B)
X = np.linalg.solve(A,B)
print(X)
import numpy as np
from scipy.linalg import solve
A = np.array([[20,10],[17,22]])
B = np.array([[350],[500]])
print(A)
print()
print(B)
print()
X = solve(A,B)
print(X)
###Output
[[20 10]
[17 22]]
[[350]
[500]]
[[10.]
[15.]]
###Markdown
Solving for three linear equations with unknown variables of x, y and z
###Code
A = np.array([[4,3,2],[-2,2,3],[3,-5,2]])
print(A)
B = np.array([[25],[-10],[-4]])
print(B)
X = solve(A,B)
print(X)
###Output
[[ 4 3 2]
[-2 2 3]
[ 3 -5 2]]
[[ 25]
[-10]
[ -4]]
[[ 5.]
[ 3.]
[-2.]]
###Markdown
Price of one apple and one orange
###Code
import numpy as np
from scipy.linalg import solve
A = np.array([[20,10],[17,22]])
B = np.array([[350],[500]])
print(A)
print(B)
X = solve(A,B)
print(X)
inv_A = np.linalg.inv(A)
print(inv_A)
X = np.linalg.inv(A).dot(B)
print(X)
X = np.dot(inv_A,B)
print(X)
###Output
[[10.]
[15.]]
###Markdown
Solving for three linear equations with unknown variables x, y, and z: 4x+3y+2z=25, -2x+2y+3z=-10, 3x-5y+2z=-4
###Code
A = np.array([[4,3,2],[-2,2,3],[3,-5,2]])
B = np.array([[25],[-10],[-4]])
print(A)
print(B)
X = solve(A,B)
print(X)
###Output
[[ 4 3 2]
[-2 2 3]
[ 3 -5 2]]
[[ 25]
[-10]
[ -4]]
[[ 5.]
[ 3.]
[-2.]]
###Markdown
**Price of one apple and one orange**
###Code
import numpy as np
from scipy.linalg import solve
A = np.array([[20,10],[17,22]])
B = np.array([[350],[500]])
print(A)
print()
print(B)
print()
X = solve(A,B)
print(X)
inv_A = np.linalg.inv(A)
print(inv_A)
print()
X = np.linalg.inv(A).dot(B)
print(X)
X = np.dot(inv_A,B)
print(X)
###Output
[[10.]
[15.]]
###Markdown
**Solving for three linear equations with unknown variables of x, y, and z**
###Code
#4x+3y+2z=25
#-2x+2y+3z=-10
#3x-5y+2z=-4
import numpy as np
from scipy.linalg import solve
A = np.array([[4,3,2],[-2,2,3],[3,-5,2]])
B = np.array([[25],[-10],[-4]])
print(A)
print()
print(B)
print()
X = solve(A,B)
print(X)
###Output
[[ 4 3 2]
[-2 2 3]
[ 3 -5 2]]
[[ 25]
[-10]
[ -4]]
[[ 5.]
[ 3.]
[-2.]]
###Markdown
The price of one apple and orange
###Code
import numpy as np
A = np.array([[20,10],[17,22]])
B = np.array([[350],[500]])
print(A)
print(B)
#other step on how to solve linear equations with NumPy and linalg.solve
A = np.array([[20,10],[17,22]])
B = np.array([[350],[500]])
print(A)
print(B)
X = np.linalg.solve(A,B)
print(X)
import numpy as np
from scipy.linalg import solve #other option you can try using SciPy
A = np.array([[20,10],[17,22]])
B = np.array([[350],[500]])
print(A)
print(B)
X = solve (A,B)
print(X)
inv_A = np.linalg.inv(A) #Inverse of A
print(inv_A)
X = np.linalg.inv(A).dot(B) #unknown values of determining the x and y
print(X)
#other option that you can try
X = np.dot(inv_A,B)
print(X)
#4x+3y+2z=25
#-2x+2y+3z=-10
#3x-5y+2z=-4
A = np.array([[4,3,2],[-2,2,3],[3,-5,2]])
print(A)
B = np.array([[25],[-10],[-4]])
print (B)
X = solve(A,B)
print(X)
###Output
[[ 4 3 2]
[-2 2 3]
[ 3 -5 2]]
[[ 25]
[-10]
[ -4]]
[[ 5.]
[ 3.]
[-2.]]
###Markdown
###Code
class Person:
def __init__(self, std, pre, mid, fin):
self.__std = std
self.__pre = pre
self.__mid = mid
self.__fin = fin
def Term(self):
print(self.__std, self.__pre, self.__mid, self.__fin)
print(" Student Term Grades (Prelim, Midterms, Finals):")
stu1 = Person("Student 1", 88, 89, 87)
stu2 = Person("Student 2:", 88, 87, 90)
stu3 = Person("Student 3:", 91, 90, 92)
stu1.Term()
stu2.Term()
stu3.Term()
class Student(Person):
def __init__(self, pre, mid, fin):
self.__pre = pre
self.__mid = mid
self.__fin = fin
def Grade(self):
return ((self.__pre + self.__mid + self.__fin)/3)
print(" Student Term Average: ")
std1 = Student(88, 89, 87)
print("Student 1: ", round(std1.Grade(), 2))
std2 = Student(88, 87, 90)
print("Student 2: ", round(std2.Grade(), 2))
std3 = Student(91, 90, 92)
print("Student 3: ", "{:.2f}".format(std3.Grade(), 2))
###Output
Student Term Grades (Prelim, Midterms, Finals):
Student 1 88 89 87
Student 2: 88 87 90
Student 3: 91 90 92
Student Term Average:
Student 1: 88.0
Student 2: 88.33
Student 3: 91.00
###Markdown
Application
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt  # needed for the bar chart below
property_type = pd.read_csv("final_df.csv")
property_type1 = property_type.iloc[:,1:33]
for i in range(len(property_type1)):
for j in range(2, len(property_type1.columns)):
if type(property_type1.iloc[i,j]) != str:
continue
elif len(property_type1.iloc[i,j]) <= 4:
property_type1.iloc[i,j] = property_type1.iloc[i,j]
else:
property_type1.iloc[i,j] = property_type1.iloc[i,j].split(",")[0] + property_type1.iloc[i,j].split(",")[1]
property_type2 = property_type1.loc[:, ["Property Type", "Mean Price"]]
property_type2["Mean Price"] = pd.to_numeric(property_type2["Mean Price"])  # values are still strings after stripping commas
property_type2 = property_type2.groupby(["Property Type"]).mean()
plt.figure(figsize = (7,4))
plt.bar(property_type2.index, property_type2["Mean Price"], color=('red','yellow','orange','blue','green','purple','black','grey'))
plt.title("Mean Price of Different Property Types")
plt.xlabel("Property Type")
plt.xticks(rotation=90)
plt.ylabel("Mean Price")
plt.show()
import numpy as np
import pandas as pd
import dash
import dash_core_components as dcc
import dash_html_components as html
from dash.dependencies import Input, Output
import webbrowser
from threading import Timer
import dash_table
import dash_table.FormatTemplate as FormatTemplate
import plotly.express as px
#Import datasets
df_details = pd.read_csv('dfclean_1adult.csv')
df_details = df_details.rename(columns = {'Unnamed: 0':'Name',
'reviews': 'no. of reviews'})
df_dates = pd.read_csv('final_df.csv').drop('Unnamed: 0', 1)
# Merge datasets
df = df_details.merge(df_dates, on='Name')
df = df.replace(to_replace = ['Y','N'],value = [1,0])
df.iloc[:,7:37] = df.iloc[:,7:37].apply(lambda x: x.astype(str))
df.iloc[:,7:37] = df.iloc[:,7:37].apply(lambda x: x.str.replace(',', '').astype(float), axis=1)
user_df = df.copy()
date_cols = user_df.columns[7:37]
hotel_types = user_df['Property Type'].unique()
features = ['Price'] + list(user_df.columns[2:5]) + list(user_df.columns[37:])
continuous_features = features[:9]
continuous_features_A = ['Price', 'Distance to Mall', 'Distance to MRT']
external_stylesheets = ['https://codepen.io/chriddyp/pen/bWLwgP.css']
app = dash.Dash(__name__, external_stylesheets=external_stylesheets)
app.title = 'Hotel Booking'
def generate_table(dataframe, max_rows=5):
df_drop_link = dataframe.drop(columns='link')
return html.Table([
html.Thead(
html.Tr([html.Th(col) for col in df_drop_link.columns])
),
html.Tbody([
html.Tr([
html.Td(dataframe.iloc[i][col]) if col != 'Name' else html.Td(html.A(href=dataframe.iloc[i]['link'], children=dataframe.iloc[i][col], target='_blank')) for col in df_drop_link.columns
]) for i in range(min(len(dataframe), max_rows))
])
])
colors = {'background': '#111111', 'text': '#7FDBFF'}
app.layout = html.Div([
#introduction
html.Div([
html.H2(children='Hello!',
style={'color': colors['text']}),
#inputs for date and hotel type
html.Div([html.H4("Step 1: Input Date (eg. 4Nov): "),
dcc.Input(id='date-input', value='4Nov', type='text')],
style={'width':'30%', 'float':'left'}),
html.Div(id='date-output-hotel'),
html.Div([
html.H4('Step 2: Select Your Preferred Hotel Types:'),
dcc.Dropdown(id='hotel-input',
options=[{'label': i, 'value': i} for i in hotel_types],
value= hotel_types,
multi=True)],
style={'width':'70%', 'float':'right'}),
html.Br(), html.Br()
]),
#return available hotels for given date
html.Div([
html.Br(), html.Br(), html.Hr(),
dcc.Graph(id='output-submit'),
html.Hr(),
]),
#input top 3 features
html.Div([
html.H4(children='Step 3: Select Your Top 3 Features:'),
]),
html.Div([
dcc.Dropdown(
id='feature1',
options=[{'label': i, 'value': i} for i in features],
value= features[0]
), html.Br(),
dcc.Slider(id='weight1',
min= 10, max= 90, step= 10,
marks={i: '{}%'.format(i) for i in np.arange(10, 90, 10).tolist()},
value=50)
], style={"display": "grid", "grid-template-columns": "20% 10% 70%", "grid-template-rows": "50px"}
),
html.Div([
dcc.Dropdown(
id='feature2',
options=[{'label': i, 'value': i} for i in features],
value= features[1]
), html.Br(),
dcc.Slider(id='weight2',
min= 10, max= 90, step= 10,
marks={i: '{}%'.format(i) for i in np.arange(10, 90, 10).tolist()},
value=30)
], style={"display": "grid", "grid-template-columns": "20% 10% 70%", "grid-template-rows": "50px"}
),
html.Div([
dcc.Dropdown(
id='feature3',
options=[{'label': i, 'value': i} for i in features],
value= features[2]
), html.Br(),
dcc.Slider(id='weight3',
min= 10, max= 90, step= 10,
marks={i: '{}%'.format(i) for i in np.arange(10, 90, 10).tolist()},
value=20)
], style={"display": "grid", "grid-template-columns": "20% 10% 70%", "grid-template-rows": "50px"}
),
#return top 5 hotels recommended
html.Div([
html.Hr(),
html.H2(children='Top 5 Hotels Recommended For You',
style={'color': colors['text']}),
html.Div(id='output-feature'),
html.Hr()
])
])
#update available hotels for given date
@app.callback(Output('output-submit', 'figure'),
[Input('hotel-input', 'value'), Input('date-input', 'value')])
def update_hotels(hotel_input, date_input):
user_df = df.copy()
user_df = user_df[user_df[date_input].notnull()]
user_df = user_df[user_df['Property Type'].isin(hotel_input)]
plot_df = pd.DataFrame(user_df.groupby('Property Type')['Name'].count()).reset_index()
fig = px.bar(plot_df, x='Property Type', y='Name', color="Property Type", title="Hotel Types available on {}:".format(date_input))
fig.update_layout(transition_duration=500)
return fig
#update top 5 hotels recommended
@app.callback(Output('output-feature', 'children'),
[Input('hotel-input', 'value'), Input('date-input', 'value'),
Input('feature1', 'value'), Input('feature2', 'value'), Input('feature3', 'value'),
Input('weight1', 'value'), Input('weight2', 'value'), Input('weight3', 'value')])
def update_features(hotel_input, date_input, feature1, feature2, feature3, weight1, weight2, weight3):
user_df = df.copy()
user_df = user_df[user_df[date_input].notnull()]
user_df['Price'] = user_df[date_input]
user_df = user_df[user_df['Property Type'].isin(hotel_input)]
features= [feature1, feature2, feature3]
selected_features = features.copy()
selected_continuous = set(selected_features) & set(continuous_features)
for i in selected_continuous:
col = i + str(' rank')
if i in continuous_features_A:
user_df[col] = user_df[i].rank(ascending=False) #higher value, lower score
else:
user_df[col] = user_df[i].rank(ascending=True) #higher value, higher score
selected_features[selected_features.index(i)] = col #replace element in list name with new col name
#Scoring: weight * feature's score
user_df['Score'] = (((weight1/100) * user_df[selected_features[0]])
+ ((weight2/100) * user_df[selected_features[1]])
+ ((weight3/100) * user_df[selected_features[2]])).round(1)
#Score-to-Price ratio
user_df['Value_to_Price ratio'] = (user_df['Score'] / user_df['Price']).round(1)
user_df = user_df.sort_values(by=['Value_to_Price ratio'], ascending = False).reset_index()
features_result = [i for i in features if i != 'Price']
selected_features_result = [i for i in selected_features if i not in features_result]
user_df_results = user_df[['Name', 'Property Type', 'Price', 'Score', 'Value_to_Price ratio'] + ['link'] + features_result + selected_features_result]
return generate_table(user_df_results.head(5))
port = 8050
url = "http://127.0.0.1:{}".format(port)
def open_browser():
webbrowser.open_new(url)
if __name__ == '__main__':
Timer(0.5, open_browser).start();
app.run_server( debug= False, port=port)
###Output
_____no_output_____
###Markdown
Price Prediction
###Code
import glob
import pandas as pd
import numpy as np
import statsmodels.formula.api as smf
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
%matplotlib inline
from sklearn.metrics import mean_squared_error, r2_score
from sklearn import datasets, linear_model
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import LogisticRegression
import random
import xgboost as xgb
dfs = glob.glob("*Novhotels.csv")
# for df in dfs:
train_features = pd.read_csv("10Novhotels.csv")
#Preliminary data cleaning
col_names = train_features.columns
list1 = []
for i in col_names:
prop_na = sum(train_features.loc[:,i].isnull())/train_features.loc[:,"Laundry Service"].count()
if prop_na >= .9:
list1.append(i)
title = ['Price', 'Property Type', 'Number of Stars', 'Review Score',
'Cleanliness', 'Distance to Mall', 'Distance to MRT',
'Early Check-in (Before 3pm)', 'Late Check-out (After 12pm)',
'Pay Later', 'Free Cancellation', 'Gym', 'Swimming Pool', 'Car Park',
'Airport Transfer', 'Breakfast', 'Hygiene+ (Covid-19)',
'24h Front Desk', 'Laundry Service', 'Bathtub', 'Balcony', 'Kitchen',
'TV', 'Internet', 'Air Conditioning', 'Ironing', 'Non-Smoking']
train_features = train_features.drop(columns = list1)
train_features = train_features.drop(['Unnamed: 0', 'Name'], axis = 1)
#train_features.rename(columns={'*Nov': 'Price'}, inplace=True)
train_features.columns = title
pd.options.display.max_columns = None
pd.options.display.max_rows = None
# display(train_features.head())
train_features = train_features.replace(['Y', 'N'], [1, 0])
train_features = train_features[train_features["Price"].notna()]
train_features["Price"] = train_features["Price"].astype(str).str.replace(',','')
# train_features["Price"] = train_features["Price"].str.replace(',','')
train_features["Price"] = pd.to_numeric(train_features["Price"])
#Change stars to categorical
train_features["Number of Stars"] = train_features["Number of Stars"].astype(str)
#One hot encoding
train_features = pd.get_dummies(train_features)
#Check for missing data
# check = train_features.isnull().sum()
mean_val_distmall = round(train_features['Distance to Mall'].mean(),0)
train_features['Distance to Mall']=train_features['Distance to Mall'].fillna(mean_val_distmall)
mean_val_distmrt = round(train_features['Distance to MRT'].mean(),0)
train_features['Distance to MRT']=train_features['Distance to MRT'].fillna(mean_val_distmrt)
mean_val_price = round(train_features['Price'].mean(),0)
train_features['Price']=train_features['Price'].fillna(mean_val_price)
# print(train_features.isnull().sum())
# Create correlation matrix
corr_matrix = train_features.corr().abs()
# Select upper triangle of correlation matrix
upper = corr_matrix.where(np.triu(np.ones(corr_matrix.shape), k=1).astype(bool))  # np.bool is deprecated; plain bool works
# Find features with correlation greater than 0.95
to_drop = [column for column in upper.columns if any(upper[column] > 0.95)]
# Drop features
train_features.drop(to_drop, axis=1, inplace=True)
labels = []
for i in train_features.columns:
labels.append(i)
labels.remove('Price')
training_features = labels
target = 'Price'
random.seed(5)
#Perform train-test split
#creating 90% training data and 10% test data
X_train, X_test, Y_train, Y_test = train_test_split(train_features[training_features], train_features[target], train_size = 0.9)
colsample = np.arange(0.0, 1.1, 0.1)
learningrate = np.arange(0.0, 1.1, 0.1)
maxdepth = list(range(1, 1000))
alpha_val = list(range(1, 1000))
n_estimators_val = list(range(1, 1000))
# for a in range(len(maxdepth)):
xg_reg = xgb.XGBRegressor(objective ='reg:squarederror', colsample_bytree = 0.3, learning_rate = 0.1, # 'reg:linear' was deprecated in favour of 'reg:squarederror'
                max_depth = 5, alpha = 1, n_estimators = 20)
xg_reg.fit(X_train,Y_train)
predicted = xg_reg.predict(X_test)
# print(n_estimators_val[a])
#the mean squared error
print('Mean squared error: %.2f' % mean_squared_error(Y_test, predicted))
#explained variance score: 1 is perfect prediction
print('R square score: %.2f' % r2_score(Y_test,predicted))
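# A minimal grid-search sketch over the ranges defined above (illustrative, not
# part of the original run; left commented out so the recorded output still matches):
# best = None
# for lr in learningrate[1:]:
#     for cs in colsample[1:]:
#         reg = xgb.XGBRegressor(objective='reg:squarederror', colsample_bytree=cs,
#                                learning_rate=lr, max_depth=5, alpha=1, n_estimators=20)
#         reg.fit(X_train, Y_train)
#         mse = mean_squared_error(Y_test, reg.predict(X_test))
#         if best is None or mse < best[0]:
#             best = (mse, lr, cs)
# print('best (mse, learning_rate, colsample):', best)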
df = pd.read_csv("prices_1adult.csv")
df = df.replace(to_replace ="[]", value =np.nan)
df = pd.melt(df, id_vars='Unnamed: 0')
df.columns = ["Name","Date","Price"]
df.head()
df_second = pd.read_csv("Predicted_Price.csv")
df_second.head()
df_second = df_second.drop_duplicates()
df_merge_col = pd.merge(df, df_second, on=['Name','Date'])
# df_merge_col.to_csv("Predicted_Price.csv")
###Output
_____no_output_____
###Markdown
Fashionet.AI Rev 1, an app for clothes classification and colour recognition, by Yannis Georgas, 5-Jan-2019. TABLE OF CONTENTS: SECTION 1: Load the trained model. SECTION 2: Predict object class using image or camera. SECTION 3: Understand the dominant colours of the object using k-means. SECTION 4: What next? SECTION 1: Load the trained model. Let's load our pre-trained model here.
###Code
import cv2
import numpy as np
from keras.models import load_model, Model
# on macOS the lines below are required so the jupyter notebook doesn't crash
import os
os.environ['KMP_DUPLICATE_LIB_OK']='True'
my_model = load_model('fashion_model.h5')
###Output
WARNING:tensorflow:From /Users/ygeorgas/anaconda3/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py:517: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.
WARNING:tensorflow:From /Users/ygeorgas/anaconda3/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py:4138: The name tf.random_uniform is deprecated. Please use tf.random.uniform instead.
WARNING:tensorflow:From /Users/ygeorgas/anaconda3/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py:131: The name tf.get_default_graph is deprecated. Please use tf.compat.v1.get_default_graph instead.
WARNING:tensorflow:From /Users/ygeorgas/anaconda3/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py:133: The name tf.placeholder_with_default is deprecated. Please use tf.compat.v1.placeholder_with_default instead.
WARNING:tensorflow:From /Users/ygeorgas/anaconda3/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py:3445: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version.
Instructions for updating:
Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.
WARNING:tensorflow:From /Users/ygeorgas/anaconda3/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py:3976: The name tf.nn.max_pool is deprecated. Please use tf.nn.max_pool2d instead.
WARNING:tensorflow:From /Users/ygeorgas/anaconda3/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py:174: The name tf.get_default_session is deprecated. Please use tf.compat.v1.get_default_session instead.
WARNING:tensorflow:From /Users/ygeorgas/anaconda3/lib/python3.7/site-packages/keras/optimizers.py:790: The name tf.train.Optimizer is deprecated. Please use tf.compat.v1.train.Optimizer instead.
WARNING:tensorflow:From /Users/ygeorgas/anaconda3/lib/python3.7/site-packages/tensorflow/python/ops/math_grad.py:1250: add_dispatch_support.<locals>.wrapper (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
###Markdown
You can also print a summary of your model by running the following code.
###Code
my_model.summary()
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_3 (InputLayer) (None, 250, 250, 3) 0
_________________________________________________________________
block1_conv1 (Conv2D) (None, 250, 250, 64) 1792
_________________________________________________________________
block1_conv2 (Conv2D) (None, 250, 250, 64) 36928
_________________________________________________________________
block1_pool (MaxPooling2D) (None, 125, 125, 64) 0
_________________________________________________________________
block2_conv1 (Conv2D) (None, 125, 125, 128) 73856
_________________________________________________________________
block2_conv2 (Conv2D) (None, 125, 125, 128) 147584
_________________________________________________________________
block2_pool (MaxPooling2D) (None, 62, 62, 128) 0
_________________________________________________________________
block3_conv1 (Conv2D) (None, 62, 62, 256) 295168
_________________________________________________________________
block3_conv2 (Conv2D) (None, 62, 62, 256) 590080
_________________________________________________________________
block3_conv3 (Conv2D) (None, 62, 62, 256) 590080
_________________________________________________________________
block3_pool (MaxPooling2D) (None, 31, 31, 256) 0
_________________________________________________________________
block4_conv1 (Conv2D) (None, 31, 31, 512) 1180160
_________________________________________________________________
block4_conv2 (Conv2D) (None, 31, 31, 512) 2359808
_________________________________________________________________
block4_conv3 (Conv2D) (None, 31, 31, 512) 2359808
_________________________________________________________________
block4_pool (MaxPooling2D) (None, 15, 15, 512) 0
_________________________________________________________________
block5_conv1 (Conv2D) (None, 15, 15, 512) 2359808
_________________________________________________________________
block5_conv2 (Conv2D) (None, 15, 15, 512) 2359808
_________________________________________________________________
block5_conv3 (Conv2D) (None, 15, 15, 512) 2359808
_________________________________________________________________
block5_pool (MaxPooling2D) (None, 7, 7, 512) 0
_________________________________________________________________
sequential_3 (Sequential) (None, 6) 6424326
=================================================================
Total params: 21,139,014
Trainable params: 13,503,750
Non-trainable params: 7,635,264
_________________________________________________________________
###Markdown
---------------------------------------------------------------------------------------------------------- SECTION 2: Predict object class using image or camera. Now that the trained model is loaded, let's look at the photo we wish to classify.
###Code
import matplotlib.pyplot as plt
%matplotlib inline
from PIL import Image
image = Image.open('./blousetest.jpg')
plt.imshow(image)
class_names = ['Blouse', 'Hoodie', 'T-Shirt']
width = 250
height = 250
###Output
_____no_output_____
###Markdown
We open the camera to classify what we are wearing:
###Code
import time
# get the reference to the webcam
camera = cv2.VideoCapture(0)
camera_height = 500
while(True):
# read a new frame
_, frame = camera.read()
    # flip the frame
frame = cv2.flip(frame, 1)
# rescaling camera output
aspect = frame.shape[1] / float(frame.shape[0])
res = int(aspect * camera_height) # landscape orientation - wide image
frame = cv2.resize(frame, (res, camera_height))
# add rectangle
cv2.rectangle(frame, (300, 75), (650, 425), (240, 100, 0), 2)
# get ROI
roi = frame[75+2:425-2, 300+2:650-2]
# parse BRG to RGB
roi = cv2.cvtColor(roi, cv2.COLOR_BGR2RGB)
# resize
roi = cv2.resize(roi, (width, height))
# predict!
roi_X = np.expand_dims(roi, axis=0)
predictions = my_model.predict(roi_X)
    type_1_pred, type_2_pred, type_3_pred = predictions[0][:3] # the loaded model outputs 6 values; only the first 3 classes are displayed here
# Blouse
type_1_text = '{}: {}%'.format(class_names[0], int(type_1_pred*100))
cv2.putText(frame, type_1_text, (70, 170),
cv2.FONT_HERSHEY_SIMPLEX, 0.6, (240, 240, 240), 2)
# Hoodie
type_2_text = '{}: {}%'.format(class_names[1], int(type_2_pred*100))
cv2.putText(frame, type_2_text, (70, 200),
cv2.FONT_HERSHEY_SIMPLEX, 0.6, (240, 240, 240), 2)
# Shirt
type_3_text = '{}: {}%'.format(class_names[2], int(type_3_pred*100))
cv2.putText(frame, type_3_text, (70, 230),
cv2.FONT_HERSHEY_SIMPLEX, 0.6, (240, 240, 240), 2)
# show the frame
cv2.imshow("Test out", frame)
key = cv2.waitKey(1)
# quit camera if 'q' key is pressed
if key & 0xFF == ord("q"):
break
camera.release()
cv2.destroyAllWindows()
###Output
_____no_output_____
###Markdown
Or we can load a photo from our computer and classify it:
###Code
import numpy as np
from keras.preprocessing import image
from keras.applications.imagenet_utils import preprocess_input
from keras.models import load_model
from keras.preprocessing.image import img_to_array, load_img
from matplotlib.pyplot import imshow
import matplotlib.pyplot as plt
import imageio
#test_model = load_model('fine_tune_model_DvCvS.h5')
#img = load_img('image_to_predict.jpg',False,target_size=(img_width,img_height))
img_width,img_height = 250, 250
img_path = 'blousetest.jpg'
img = load_img(img_path,False,target_size=(img_width,img_height)) # parameter: grayscale=False
x = img_to_array(img)
x = np.expand_dims(x, axis=0)
print('Input image shape:', x.shape)
#preds = test_model.predict_classes(x)
#prob = test_model.predict_proba(x)
#print(preds, probs)
my_image = imageio.imread(img_path)
imshow(my_image)
print(' P = [Blouses, Hoodies, Tshirts]')
print("class prediction P =", my_model.predict(x))
###Output
Input image shape: (1, 250, 250, 3)
P = [Blouses, Hoodies, Tshirts]
class prediction P = [[1. 0. 0. 0. 0. 0.]]
###Markdown
---------------------------------------------------------------------------------------------------------- SECTION 3: Understand the dominant colours of the object using k-means (taken from rmotr.com). Now we will use the k-means algorithm to find the dominant colours of the photo used above.
###Code
import os
import numpy as np
from matplotlib import pyplot as plt
from PIL import Image
from collections import Counter
from sklearn.cluster import KMeans
%matplotlib inline
###Output
_____no_output_____
###Markdown
Changing the RGB to HTML Hex Color Codes: https://www.w3schools.com/colors/colors_hexadecimal.asp
###Code
# Utility function, rgb to hex
def rgb2hex(rgb):
hex = "#{:02x}{:02x}{:02x}".format(int(rgb[0]), int(rgb[1]), int(rgb[2]))
return hex
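# quick sanity check (illustrative): rgb2hex((255, 0, 128)) returns '#ff0080'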
###Output
_____no_output_____
###Markdown
Image color extraction using Scikit Learn. We'll see how simple it is to identify the most important colors in an image using K-Means, an unsupervised ML model from the Scikit Learn package. Step 1: Define some meta variables. You can play around with the following variables to generate different results. CLUSTERS is probably the most important one, as it defines the number of colors we'll extract from the image.
###Code
PATH = './blousetest.jpg'
WIDTH = 250
HEIGHT = 250
CLUSTERS = 6 # max number of colours (clusters of colours) we would like to identify
###Output
_____no_output_____
###Markdown
Step 2: Open the image using Pillow. We'll use the Pillow library to open and manipulate the image.
###Code
image = Image.open(PATH)
image.size
print("Loaded {f} image. Size: {s:.2f} KB. Dimensions: ({d})".format(
f=image.format, s=os.path.getsize(PATH) / 1024, d=image.size))
###Output
Loaded JPEG image. Size: 173.02 KB. Dimensions: ((990, 1228))
###Markdown
Step 3: Resize image. The ML model will take considerably longer if the image is large. We'll resize it, keeping the aspect ratio.
###Code
def calculate_new_size(image):
if image.width >= image.height:
wpercent = (WIDTH / float(image.width))
hsize = int((float(image.height) * float(wpercent)))
new_width, new_height = WIDTH, hsize
else:
hpercent = (HEIGHT / float(image.height))
wsize = int((float(image.width) * float(hpercent)))
new_width, new_height = wsize, HEIGHT
return new_width, new_height
calculate_new_size(image)
new_width, new_height = calculate_new_size(image)
image.resize((new_width, new_height), Image.ANTIALIAS)
image = image.resize((new_width, new_height), Image.ANTIALIAS)
###Output
_____no_output_____
###Markdown
Step 4: Creating the numpy arrays. Our ML model needs the image as an array of pixels. We explained this in detail in one of our workshops: https://www.youtube.com/watch?v=2Q4L3MtdAbY
###Code
img_array = np.array(image)
img_vector = img_array.reshape((img_array.shape[0] * img_array.shape[1], 3))
###Output
_____no_output_____
###Markdown
Step 5: Create the model and train it. We're ready for the true ML part. We'll create a model using N clusters and extract the colors.
###Code
model = KMeans(n_clusters=CLUSTERS)
labels = model.fit_predict(img_vector)
label_counts = Counter(labels)
total_count = sum(label_counts.values())
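# model.cluster_centers_ now holds the CLUSTERS dominant RGB colours;
# label_counts maps each cluster label to its pixel count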
###Output
_____no_output_____
###Markdown
These are the colors extracted:
###Code
hex_colors = [
rgb2hex(center) for center in model.cluster_centers_
]
hex_colors
###Output
_____no_output_____
###Markdown
And this is the proportion of each color:
###Code
list(zip(hex_colors, list(label_counts.values())))
###Output
_____no_output_____
###Markdown
Final Result: we can now see the extracted colors alongside the original image:
###Code
plt.figure(figsize=(14, 10))
plt.subplot(221)
plt.imshow(image)
plt.axis('off')
plt.subplot(222)
plt.pie(label_counts.values(), labels=hex_colors, colors=[color / 255 for color in model.cluster_centers_], startangle=90)
plt.axis('equal')
plt.show()
###Output
_____no_output_____
###Markdown
###Code
import numpy as np
from scipy.linalg import solve
A = np.array([[4,5],[3,-2]])
B = np.array([[7],[11]])
print(A)
print()
print(B)
print()
X = solve(A,B)
print(X)
inv_A = np.linalg.inv(A)
print(inv_A)
print()
X = np.linalg.inv(A).dot(B)
print(X)
X = solve(A,B)
print(X)
###Output
_____no_output_____
###Markdown
The price of one apple and one orange
###Code
import numpy as np
from scipy.linalg import solve
A=np.array([[20,10],[17,22]])
B=np.array([[350],[500]])
print(A)
print(B)
X=solve(A,B)
print(X)
inv_A=np.linalg.inv(A)
print(inv_A)
X=np.linalg.inv(A).dot(B)
print(X)
X=np.dot(inv_A,B)
print(X)
###Output
[[10.]
 [15.]]
###Markdown
Solving for three linear equations with unknown variables x, y, and z
###Code
#4x+3y+2z=25
#-2x+2y+3z=-10
#3x-5y+2z=-4
from scipy.linalg import solve
A=np.array([[4,3,2],[-2,2,3],[3,-5,2]])
print(A)
B=np.array([[25],[-10],[-4]])
print(B)
X=solve(A,B)
print(X)
###Output
[[ 4 3 2]
[-2 2 3]
[ 3 -5 2]]
[[ 25]
[-10]
[ -4]]
[[ 5.]
[ 3.]
[-2.]]
###Markdown
Price of one apple and one orange
###Code
import numpy as np
from scipy.linalg import solve
A = np.array([[20,10],[17,22]])
B = np.array([[350],[500]])
print(A)
print()
print(B)
print()
X = solve(A,B)
print(X)
inv_A = np.linalg.inv(A)
print(inv_A)
print()
X = np.linalg.inv(A).dot(B)
print(X)
X = np.dot(inv_A,B)
print(X)
###Output
[[10.]
[15.]]
###Markdown
Solving for three linear equations with unknown variables of x, y, and z
###Code
#4x+3y+2z=25
#-2x+2y+3z=-10
#3x-5y+2z=-4
import numpy as np
from scipy.linalg import solve
A = np.array([[4,3,2],[-2,2,3],[3,-5,2]])
B = np.array([[25],[-10],[-4]])
print(A)
print()
print(B)
print()
X = solve(A,B)
print(X)
###Output
[[ 4 3 2]
[-2 2 3]
[ 3 -5 2]]
[[ 25]
[-10]
[ -4]]
[[ 5.]
[ 3.]
[-2.]]
###Markdown
The price of one apple and one orange.
###Code
import numpy as np #Matrix Operator.
from scipy.linalg import solve #Open Library
A = np.array([[20,10],[17,22]]) #Creation of 2x2 matrix named as matrix A.
B = np.array([[350],[500]]) #Creation of 2x2 matrix named as matrix B.
print(A) #Display matrix A.
print(B) #Displays matrix B.
X = solve(A,B) #Solves for A and B
print(X) #Displays the final output.
#X = np.linalg.solve(A,B) #Direct way for solving functions.
#print(X) #Displays the final output.
inv_A=np.linalg.inv(A) #To get the inverse of matrix A.
print(inv_A) #Displays the inverse of matrix A.
X = np.linalg.inv(A).dot(B) #Inverses matrix A then gets the dot product of inv_A and matrix B.
print(X) #Displays the final output.
X = np.dot(inv_A,B) #Another way of inversing and getting the dot product of matrices.
print(X) #Displays the final output.
###Output
[[10.]
[15.]]
###Markdown
Solving for three linear equation with unknown variables of x, y, and z.
###Code
#4x+3y+2z=25
#-2x+2y+3z=-10
#3x-5y+2z=-4
A = np.array([[4,3,2],[-2,2,3],[3,-5,2]]) #Creation of 3x3 matrix named as matrix A.
print(A) #Display matrix A
B = np.array([[25],[-10],[-4]]) #Creation of 3x3 matrix named as matrix B.
print(B) #Display matrix B
X = solve(A,B) #From the scipy library, that solve the function.
print(X) #Displays the final output.
###Output
[[ 4 3 2]
[-2 2 3]
[ 3 -5 2]]
[[ 25]
[-10]
[ -4]]
[[ 5.]
[ 3.]
[-2.]]
###Markdown
The price of one apple and one orange
###Code
import numpy as np
from scipy.linalg import solve
A = np.array([[20,10],[17,22]])
B = np.array([[350],[500]])
print(A)
print(B)
X = solve(A,B)
print(X)
inv_A = np.linalg.inv(A)
print(inv_A)
X = np.linalg.inv(A).dot(B)
print(X)
X = np.dot(inv_A,B)
print(X)
###Output
[[10.]
[15.]]
###Markdown
Solving for three linear equations with unknown variables of x, y, and z
###Code
#4x+3y+2z = 25
#-2x+2y+3z = -10
#3x-5y+2z = -4
A = np.array([[4,3,2],[-2,2,3],[3,-5,2]])
B = np.array([[25],[-10],[-4]])
print(A)
print(B)
X = solve(A,B)
print(X)
###Output
[[ 4 3 2]
[-2 2 3]
[ 3 -5 2]]
[[ 25]
[-10]
[ -4]]
[[ 5.]
[ 3.]
[-2.]]
###Markdown
###Code
import numpy as np
from scipy.linalg import solve
A = np.array([[20,10],[17,22]])
B = np.array([[350],[500]])
print(A)
print(B)
inv_A = np.linalg.inv(A)
print(inv_A)
X = np.linalg.inv(A).dot(B)
print(X)
X = np.dot(inv_A,B)
print(X)
###Output
[[10.]
[15.]]
###Markdown
Solving for three linear equations with unknown variables of x,y,z
###Code
from scipy.linalg import solve
#4x+3y+2z = 25
#-2x+2y+3z = -10
#3x-5y+2z = -4
A = np.array([[4,3,2],[-2,2,3],[3,-5,2]])
print(A)
B = np.array([[25],[-10],[-4]])
print(B)
X = solve(A,B)
print(X)
###Output
[[ 4 3 2]
[-2 2 3]
[ 3 -5 2]]
[[ 25]
[-10]
[ -4]]
[[ 5.]
[ 3.]
[-2.]]
###Markdown
**The price of one apple and one orange**
###Code
import numpy as np
A = np.array([[20,10],[17,22]])
B = np.array([[350],[500]])
print(A)
print(B)
#other step on how to solve linear equations with NumPy and linalg.solve
A = np.array([[20,10],[17,22]])
B = np.array([[350],[500]])
print(A)
print(B)
X = np.linalg.solve(A,B)
print(X)
import numpy as np
from scipy.linalg import solve #other option you can try using SciPy
A = np.array([[20,10],[17,22]])
B = np.array([[350],[500]])
print(A)
print(B)
X = solve (A,B)
print(X)
inv_A = np.linalg.inv(A) #Inverse of A
print(inv_A)
X = np.linalg.inv(A).dot(B) #unknown values of determining the x and y
print(X)
#other option that you can try
X = np.dot(inv_A,B)
print(X)
###Output
[[10.]
[15.]]
###Markdown
**Solving for three linear equations with unknown variables of x, y, and z**
###Code
#4x+3y+2z=25
#-2x+2y+3z=-10
#3x-5y+2z=-4
A = np.array([[4,3,2],[-2,2,3],[3,-5,2]])
print(A)
B = np.array([[25],[-10],[-4]])
print (B)
X = solve(A,B)
print(X)
###Output
[[ 4 3 2]
[-2 2 3]
[ 3 -5 2]]
[[ 25]
[-10]
[ -4]]
[[ 5.]
[ 3.]
[-2.]]
###Markdown
Price of one apple and one orange
###Code
import numpy as np
from scipy.linalg import solve
A = np.array([[20,10],[17,22]])
B = np.array([[350],[500]])
print(A)
print()
print(B)
print()
X = solve(A,B)
print(X)
inv_A = np.linalg.inv(A)
print(inv_A)
print()
X = np.linalg.inv(A).dot(B)
print(X)
X = np.dot(inv_A,B)
print(X)
###Output
[[10.]
[15.]]
###Markdown
Solving for three linear equations with unknown variables of x, y, and z
###Code
#4x+3y+2z=25
#-2x+2y+3z=-10
#3x-5y+2z=-4
import numpy as np
from scipy.linalg import solve
A = np.array([[4,3,2],[-2,2,3],[3,-5,2]])
B = np.array([[25],[-10],[-4]])
print(A)
print()
print(B)
print()
X = solve(A,B)
print(X)
###Output
[[ 4 3 2]
[-2 2 3]
[ 3 -5 2]]
[[ 25]
[-10]
[ -4]]
[[ 5.]
[ 3.]
[-2.]]
###Markdown
The price of one apple and one orange
###Code
import numpy as np
from scipy.linalg import solve
A = np.array(([20,10],[17,22]))
B = np.array(([350],[500]))
print(A)
print(B)
X = np.linalg.solve(A,B)
print(X)
inv_A = np.linalg.inv(A)
print(inv_A)
X = np.linalg.inv(A).dot(B)
print(X)
X = np.dot(inv_A,B)
print(X)
###Output
[[10.]
[15.]]
###Markdown
Solving for three linear equations with unknown variables of x, y, and z
###Code
#4x + 3y+ 2z = 25
#-2x + 2y+ 3z = -10
#3x - 5y + 2z = -4
A = np.array(([4,3,2],[-2,2,3],[3,-5,2]))
B = np.array(([25],[-10],[-4]))
print(A)
print(B)
X = np.linalg.solve(A,B)
print(X)
###Output
[[ 4 3 2]
[-2 2 3]
[ 3 -5 2]]
[[ 25]
[-10]
[ -4]]
[[ 5.]
[ 3.]
[-2.]]
###Markdown
Objectives: 1. Is there a direct correlation between salary and formal education? 2. Is there a direct correlation between earnings and employment status (full time / part time)? 3. What is the relation between salary and years of experience? 4. Upgrade your skills to earn more.
###Code
# for this project we will follow the CRISP-DM Steps
# Business Understanding
# Data Understanding
# Data Preparation
# Data Modeling
# Evaluation
# Deployment
# Importing Libraries That we may need some or all of them
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
# Importing Dataset
df = pd.read_csv('survey_results_public.csv')
schema = pd.read_csv('survey_results_schema.csv')
#Drop the rows with missing salaries
# as most of the questions are related to the salary
# we dropped all the rows with missing values
# we didn't impute missing values to avoid biasing the results
df = df.dropna(subset=['Salary'], axis=0)
#Drop columns with all NaN values
# cleaning the data from unncessary columns
#for future use or update to this analysis
#also columns with no values provides no use
df = df.dropna(how='all', axis=1)
# Creating copies of Clean dataset
question_1 = df.copy(deep=True)
question_2 = df.copy(deep=True)
question_3 = df.copy(deep=True)
question_4 = df.copy(deep=True)
question_5 = df.copy(deep=True)
# Having a look at the data for understanding
df.head()
# Checking for nulls in Salary column
df.Salary.isnull().mean() == 0
#Descriptive statistics include those that summarize the central tendency,
#dispersion and shape of a dataset’s distribution, excluding NaN values.df.describe()
df.describe()
# Function to understand Data in each column
# Function that returns column from survey schema
def get_description(column_name, schema=schema):
'''
SUMMARY:
Returns the description of a column
INPUT:
schema - pandas dataframe with the schema of the developers survey
column_name - string - the name of the column you would like to know about
OUTPUT:
desc - string - the description of the column
'''
desc = list(schema[schema['Column'] == column_name]['Question'])[0]
return desc
#Function used in Preparing Data
# Function to group with it
def grouping_function(data, column_name):
"""
SUMMARY:
Returns a grouped dataframe
INPUT:
data (object): Dataframe
column_name (char): Column which is to be grouped by
Returns:
GroupBy object with mean
"""
grouped_df = data.groupby([column_name]).mean().reset_index()
return grouped_df
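# example usage (illustrative): grouping_function(df, 'FormalEducation') returns
# the mean of every numeric column (including Salary) for each education level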
###Output
_____no_output_____
###Markdown
1. Education versus Salary Correlation. We need to get a relation between educational level and salary; in this case the most relevant columns are 'FormalEducation' and 'Salary'.
###Code
# getting description of used columns for better data understanding
print(get_description('FormalEducation'))
print(get_description('Salary'))
# Grouping our Data and placing it in a new data set
S_1 = grouping_function(question_1, 'FormalEducation')
R_1 = S_1[['FormalEducation', 'Salary']]
R_1.sort_values('Salary')
R_1.plot.barh(x='FormalEducation', y='Salary', rot=0)
plt.title("Education versus Salary Correlation?");
###Output
_____no_output_____
###Markdown
It is very obvious that developers with higher formal educational degrees (doctoral degrees) earn the highest, with a salary of approximately USD 79K, followed by primary/elementary school graduates at USD 63K. Also, university study with or without a Bachelor's degree earns almost the same, showing that learning isn't always about formal degrees so much as about understanding, dedication, and hard work. 2. Is there a direct correlation between earnings and employment status (full time / part time)? For the above question we will use the Salary and EmploymentStatus columns to get a better understanding.
###Code
# getting the description of the used columns for better data understanding
print(get_description('EmploymentStatus'))
print(get_description('Salary'))
# Grouping our Data and placing it in a new data set
S_2 = grouping_function(question_2, 'EmploymentStatus')
R_2 = S_2[['EmploymentStatus', 'Salary']]
R_2.sort_values('Salary')
R_2.plot.barh(x='EmploymentStatus', y='Salary', rot=0)
plt.title("Employment Type versus Salary Correlation?");
###Output
_____no_output_____
###Markdown
It may seem that there is a direct relation between employment type and earnings, but taking into consideration that many of the people who participated in the survey, such as freelancers and retired respondents, have no recorded salary, the results for the above question are biased and will be ignored in the blog post. 3 - Relation between Salary and years of experience
###Code
# getting the description of the used columns (Salary and YearsProgram) for better data understanding
print(get_description('YearsProgram'))
print(get_description('Salary'))
# Grouping our Data and placing it in a new data set
S_3 = grouping_function(question_3, 'YearsProgram')
R_3 = S_3[['YearsProgram', 'Salary']]
R_3.sort_values('Salary')
R_3.plot.barh(x='YearsProgram', y='Salary', rot=0)
plt.title("relation between Salary and years of experience ?");
###Output
_____no_output_____
###Markdown
4 - Upgrade your skills to earn more. People always assume that if you have mastered more than one language, you will have some benefit in terms of earnings and income. Let's figure out if they are right, working with the HaveWorkedLanguage and Salary columns:
###Code
def lang_count(data):
"""
INPUT:
data (object): 2-D Dataframe
Returns:
GroupBy object with mean
"""
# Counting number of languages
data['LanguageCount'] = data['HaveWorkedLanguage'].str.count(';') + 1
# Dropping NaN in LanguageCount
data.dropna(subset=['LanguageCount'], inplace=True)
data.reset_index(drop=True, inplace=True)
print('Responses:', data['Salary'].shape[0])
# Grouping Salary according to number of languages
grouped_salary = grouping_function(data, 'LanguageCount')
# Dropping NaN in Salary
grouped_salary.dropna(subset=['Salary'], inplace=True)
grouped_salary.reset_index(drop=True, inplace=True)
# Filtering outliers
salary_quantile = grouped_salary["Salary"].quantile(0.1)
# Considering the first 10 rows
grouped_salary = grouped_salary[grouped_salary["Salary"] > salary_quantile].head(10)
return grouped_salary
result_4 = lang_count(question_4)
R_4 = result_4[['LanguageCount', 'Salary']]
R_4.sort_values('Salary')
# Visualisation for objective 4
R_4.plot.barh(x ='LanguageCount',y = 'Salary',rot=0)
plt.title("more language = more income?");
###Output
_____no_output_____
###Markdown
Suppose a market seller sold 20 apples and 10 oranges in one day for a total of 350 pesos. The next day he sold 17 apples and 22 oranges for 500 pesos. If the prices of the fruits remained unchanged on both days, what was the price of one apple and one orange?
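As a quick hand check before the code: multiplying the first equation $20x + 10y = 350$ by $2.2$ gives $44x + 22y = 770$; subtracting the second equation $17x + 22y = 500$ leaves $27x = 270$, so $x = 10$ pesos per apple and, substituting back, $y = 15$ pesos per orange.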
###Code
# 20x+10y = 350
# 17x+22y = 500
import numpy as np
from scipy.linalg import solve
A = np.array([[20,10],
[17,22]])
B = np.array([[350],
[500]])
print(A,'\n\n', B, '\n')
# eq01 = np.linalg.inv(A).dot(B)  # past method: explicit inverse
# eq01 = np.linalg.solve(A, B)    # NumPy equivalent
eq01 = solve(A, B)                # SciPy's solve
print(eq01)
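# Note: solve(A, B) is generally preferred over inv(A).dot(B); it factorizes A
# (e.g. via LU decomposition) instead of forming the inverse explicitly, which
# is both faster and numerically more stable.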
# 4x+3y+2z = 25
# -2x+2y+3z = -10
# 3x-5y+2z = -4
# import numpy as np
# from scipy.linalg import solve
# Using what's above
A = np.array([[4,3,2],
[-2,2,3],
[3,-5,2]])
B = np.array([[25],
[-10],
[-4]])
print(A,'\n\n', B, '\n')
eq02 = solve(A, B)
print(eq02)
###Output
[[ 4 3 2]
[-2 2 3]
[ 3 -5 2]]
[[ 25]
[-10]
[ -4]]
[[ 5.]
[ 3.]
[-2.]]
###Markdown
**The price of one apple and one orange**
###Code
import numpy as np
A = np.array([[20,10],[17,22]])
B = np.array([[350],[500]])
print(A)
print(B)
X = np.linalg.solve(A,B)
print(X)
inv_A=np.linalg.inv(A)
print(inv_A)
X = np.linalg.inv(A).dot(B)
print(X)
X=np.dot(inv_A,B)
print(X)
###Output
[[10.]
[15.]]
###Markdown
**Solving three linear equations with unknown variables x, y, and z**
###Code
#4x+3y+2z=25
#-2x+2y+3z=-10
#3x-5y+2z=-4
import numpy as np
from scipy.linalg import solve
A=np.array([[4,3,2],[-2,2,3],[3,-5,2]])
print(A)
print()
B=np.array([[25],[-10],[-4]])
print(B)
print()
X= solve(A,B)
print(X)
###Output
[[ 4 3 2]
[-2 2 3]
[ 3 -5 2]]
[[ 25]
[-10]
[ -4]]
[[ 5.]
[ 3.]
[-2.]]
###Markdown
###Code
import numpy as np
from scipy.linalg import solve
fruits_sold = np.array([[20,10], #number of apples and oranges sold per day / coefficients of the linear equations
[17,22]])
total_price = np.array([[350], #total revenue per day / constants of the linear equations
[500]])
price_of_apple_orange = np.linalg.inv(fruits_sold) @ total_price #get the price of an apple and an orange / values of the unknown variables
                                                                 #by multiplying the inverse of the matrix fruits_sold
                                                                 #with the matrix total_price
print(price_of_apple_orange)
#checking
checking = fruits_sold @ price_of_apple_orange #dot product of fruits sold and the prices for an apple and an orange
#to check if it's equal to the constants of the given linear equations
print(checking)
if 20*10 + 10*15 == 350:
print('True')
else:
print('False')
coefficients = np.array([[4,3,2], #matrix of the coefficients of the linear equations
[-2,2,3],
[3,-5,2]])
constants = np.array([[25], #matrix of the constants of the linear equations
[-10],
[-4]])
unknown_variables = solve(coefficients, constants) #use solve to get the values of the
                                                   #unknown variables from the matrix of
                                                   #coefficients and the matrix of constants
print(unknown_variables) #print the values of the unknown variables
verify = coefficients @ unknown_variables #verify the solution by multiplying the coefficients
                                          #by the unknown variables, which should reproduce
                                          #the constants of the given linear equations
print(verify)
###Output
[[ 25.]
[-10.]
[ -4.]]
###Markdown
The price of one apple and one orange
###Code
import numpy as np
from scipy.linalg import solve
A = np.array([[20,10],[17,22]])
B = np.array([[350],[500]])
print(A)
print(B)
X = solve(A,B)
print(X)
inv_A = np.linalg.inv(A)
print(inv_A)
X = np.linalg.inv(A).dot(B)
print(X)
X = np.dot(inv_A,B)
print(X)
###Output
[[10.]
[15.]]
###Markdown
Solving three linear equations with unknown variables x, y, z
###Code
#4x+3y+2z=25
#-2x+2y+3z=-10
#3x-5y+2z=-4
A = np.array([[4,3,2],[-2,2,3],[3,-5,2]])
print(A)
B = np.array([[25],[-10],[-4]])
print(B)
X = solve(A,B)
print(X)
###Output
[[ 4 3 2]
[-2 2 3]
[ 3 -5 2]]
[[ 25]
[-10]
[ -4]]
[[ 5.]
[ 3.]
[-2.]]
###Markdown
the price of one apple and one orange
###Code
import numpy as np
A = np.array([[20,10], [17, 22]])
B = np.array([[350], [500]])
print(A)
print(B)
X = np.linalg.solve(A,B)
print(X)
inv_A = np.linalg.inv(A)
print(inv_A)
X = np.linalg.inv(A).dot(B)
print(X)
X = np.dot(inv_A, B)
print(X)
import numpy as np
from scipy.linalg import solve
A = np.array([[20,10],[17,22]])
B = np.array([[350],[500]])
print(A)
print(B)
X = solve(A,B)
print(X)
#4x+3y+2z=25
#-2x+2y+3z=-10
#3x-5y+2z=-4
A = np.array([[4,3,2],[-2,2,3],[3,-5,2]])
print(A)
B = np.array([[25],[-10],[-4]])
print(B)
X = solve(A,B)
print(X)
###Output
[[ 4 3 2]
[-2 2 3]
[ 3 -5 2]]
[[ 25]
[-10]
[ -4]]
[[ 5.]
[ 3.]
[-2.]]
###Markdown
The price of one apple and one orange
###Code
import numpy as np
A = np.array([[20,10],[17,22]])
B = np.array([[350],[500]])
print(A)
print(B)
#Another way to solve linear equations with NumPy and linalg.solve
A = np.array([[20,10],[17,22]])
B = np.array([[350],[500]])
print(A)
print(B)
X = np.linalg.solve(A,B)
print(X)
import numpy as np
from scipy.linalg import solve #Another option you can try using SciPy
A = np.array([[20,10],[17,22]])
B = np.array([[350],[500]])
print(A)
print(B)
X = solve (A,B)
print(X)
inv_A = np.linalg.inv(A) #Inverse of matrix A
print(inv_A)
X = np.linalg.inv(A).dot(B) #Unknown values of determining x and y
print(X)
#Another option you can try
X = np.dot(inv_A,B)
print(X)
###Output
[[10.]
[15.]]
###Markdown
Solving three linear equations with unknown variables x, y, and z
###Code
#Three linear equations where x, y, and z are unknown
#4x+3y+2z=25
#-2x+2y+3z=-10
#3x-5y+2z=-4
A = np.array([[4,3,2],[-2,2,3],[3,-5,2]])
print(A)
B = np.array([[25],[-10],[-4]])
print (B)
X = solve(A,B)
print(X)
###Output
[[ 4 3 2]
[-2 2 3]
[ 3 -5 2]]
[[ 25]
[-10]
[ -4]]
[[ 5.]
[ 3.]
[-2.]]
###Markdown
The price of one apple and one orange
###Code
import numpy as np
from scipy.linalg import solve
A = np.array([[20,10],[17,22]])
B = np.array([[350],[500]])
X = solve(A,B)
print(A,"\n")
print(B,"\n")
print(X)
invA = np.linalg.inv(A)
print(invA, "\n")
X = np.linalg.inv(A).dot(B)
print(X)
X = np.dot(invA,B)
print(X)
###Output
[[10.]
[15.]]
###Markdown
Solving three linear equations with unknown variables x, y, and z: 4x+3y+2z=25, -2x+2y+3z=-10, 3x-5y+2z=-4
###Code
a = np.array([[4,3,2],[-2,2,3],[3,-5,2]])
print(a,"\n")
b = np.array([[25],[-10],[-4]])
print(b,"\n")
x = solve(a,b)
print(x)
###Output
[[ 4 3 2]
[-2 2 3]
[ 3 -5 2]]
[[ 25]
[-10]
[ -4]]
[[ 5.]
[ 3.]
[-2.]]
|
notebooks/pipeline_plot_results_maps.ipynb | ###Markdown
Plot the results of a Daedalus simulation on mapsBefore running this notebook, you need to run a simulation using `Daedalus` library. Please refer to [README](https://github.com/alan-turing-institute/daedalus/blob/master/README.md), Section: `Run Daedalus via command line`. After running the simulation, an `output` directory is created with the following structure:```bashoutput└── E08000032 ├── config_file_E08000032.yml ├── ssm_E08000032_MSOA11_ppp_2011_processed.csv └── ssm_E08000032_MSOA11_ppp_2011_simulation.csv └── year_1 └── ssm_E08000032_MSOA11_ppp_2011_simulation_year_1.csv └── year_2 └── ssm_E08000032_MSOA11_ppp_2011_simulation_year_2.csv```Here, we will plot the results stored in these files on maps. **WARNING**We use the `cartopy` library to plot maps in this notebook. `cartopy` is not installed by default. Please follow the instructions here:https://scitools.org.uk/cartopy/docs/latest/installing.htmland make sure that `cartopy` can be imported in the following cell:
###Code
import cartopy.crs as ccrs
import datetime
import matplotlib.pyplot as plt
from pyproj import Transformer
import pandas as pd
###Output
_____no_output_____
###Markdown
Migrants pool**WARNING**In this notebook, we will work with **re-assigned** population files. If you have not run the following line:```bash!python ../scripts/validation.py --simulation_dir ../output --persistent_data_dir ../persistent_data```you need to run it now; it creates this file: `../output/E08000032/ssm_E08000032_MSOA11_ppp_2011_simulation_reassigned.csv`
###Code
pop = pd.read_csv('../output/E08000032/ssm_E08000032_MSOA11_ppp_2011_simulation_reassigned.csv')
print(f"Number of rows: {len(pop)}")
pop.head()
migrant_pool = pop[pop['internal_outmigration'] == 'Yes']
###Output
_____no_output_____
###Markdown
Prepare lat/lon of MSOAs
###Code
transformer = Transformer.from_crs('EPSG:27700', 'EPSG:4326')
def calcLatLon(row, transformer):
return transformer.transform(row["X"], row["Y"])
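# Note: with the default axis order of EPSG:4326, transform(easting, northing)
# returns (lat, lon); pass always_xy=True to Transformer.from_crs if you prefer
# (lon, lat) ordering instead.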
# read centroid file
msoa_centroids = pd.read_csv("../persistent_data/Middle_Layer_Super_Output_Areas__December_2011__Population_Weighted_Centroids.csv")
# Calculate coordinates based on centroid file
msoa_centroids["coord"] = \
msoa_centroids.apply(calcLatLon, transformer=transformer, axis=1)
msoa_centroids[["lat", "lon"]] = \
pd.DataFrame(msoa_centroids['coord'].to_list(), columns=["lat", "lon"])
msoa_centroids.head()
# Previous MSOA
migrant_pool[["prev_MSOA_lat", "prev_MSOA_lon"]] = \
migrant_pool[["previous_MSOA_locations"]].merge(msoa_centroids[["msoa11cd", "lat", "lon"]],
left_on="previous_MSOA_locations",
right_on="msoa11cd")[["lat", "lon"]]
# Current MSOA
migrant_pool[["MSOA_lat", "MSOA_lon"]] = \
migrant_pool[["MSOA"]].merge(msoa_centroids[["msoa11cd", "lat", "lon"]],
left_on="MSOA",
right_on="msoa11cd")[["lat", "lon"]]
migrant_pool.head()
###Output
_____no_output_____
###Markdown
Plots
###Code
uk_extent = [-10, 3, 49, 59]
plain_crs = ccrs.PlateCarree()
plt.figure(figsize=(10, 20))
ax = plt.axes(projection=plain_crs)
ax.coastlines(resolution='50m')
ax.gridlines()
ax.set_extent(uk_extent, crs=plain_crs)
ax.scatter(migrant_pool["prev_MSOA_lon"],
migrant_pool["prev_MSOA_lat"],
color='blue', linewidth=2,
marker='o', alpha=0.2,
transform=plain_crs)
plt.title("Origin", size=24)
plt.figure(figsize=(10, 20))
ax = plt.axes(projection=plain_crs)
ax.coastlines(resolution='50m')
ax.gridlines()
ax.set_extent(uk_extent, crs=plain_crs)
ax.scatter(migrant_pool["MSOA_lon"],
migrant_pool["MSOA_lat"],
color='red', linewidth=2,
marker='o', alpha=0.2,
transform=plain_crs)
plt.title("Destination", size=24)
plt.show()
# --- input
min_time = "2011-01-01"
max_time = datetime.datetime.strptime("2011-12-31", "%Y-%m-%d")
# intervals for plotting (in days)
interval_in_days = 100
uk_extent = [-10, 3, 49, 59]
# ---
plain_crs = ccrs.PlateCarree()
curr_time = datetime.datetime.strptime(min_time, "%Y-%m-%d")
time_axis = []
while curr_time <= max_time:
time_axis.append(curr_time)
migrant_pool_curr = migrant_pool[migrant_pool["last_outmigration_time"] <= curr_time.strftime("%Y-%m-%d")]
plt.figure(figsize=(10, 20))
ax = plt.subplot(1, 2, 1, projection=plain_crs)
ax.coastlines(resolution='50m')
ax.gridlines()
ax.set_extent(uk_extent, crs=plain_crs)
ax.scatter(migrant_pool_curr["prev_MSOA_lon"],
migrant_pool_curr["prev_MSOA_lat"],
color='blue', linewidth=2,
marker='o', alpha=0.2,
transform=plain_crs)
plt.title(curr_time)
ax = plt.subplot(1, 2, 2, projection=plain_crs)
ax.coastlines(resolution='50m')
ax.gridlines()
ax.set_extent(uk_extent, crs=plain_crs)
ax.scatter(migrant_pool_curr["MSOA_lon"],
migrant_pool_curr["MSOA_lat"],
color='red', linewidth=2,
marker='o', alpha=0.2,
transform=plain_crs)
plt.title(curr_time)
plt.show()
    # go to the next time step, according to the selected interval_in_days
    curr_time += datetime.timedelta(days=interval_in_days)
uk_extent = [-10, 3, 49, 59]
plain_crs = ccrs.PlateCarree()
cm = plt.cm.get_cmap('nipy_spectral')
plt.figure(figsize=(10, 20))
ax = plt.axes(projection=plain_crs)
ax.coastlines(resolution='50m')
ax.gridlines()
ax.set_extent(uk_extent, crs=plain_crs)
# Color with age, origin
sc = ax.scatter(migrant_pool["prev_MSOA_lon"],
migrant_pool["prev_MSOA_lat"],
c=migrant_pool["age"],
linewidth=2, marker='o',
cmap=cm,
vmin=0, vmax=100,
transform=plain_crs)
cbar = plt.colorbar(sc, fraction=0.03, pad=0.04)
cbar.set_label("Age")
plt.title("Origin", size=24)
# Color with age, destination
plt.figure(figsize=(10, 20))
ax = plt.axes(projection=plain_crs)
ax.coastlines(resolution='50m')
ax.gridlines()
ax.set_extent(uk_extent, crs=plain_crs)
ax.scatter(migrant_pool["MSOA_lon"],
migrant_pool["MSOA_lat"],
c=migrant_pool["age"],
linewidth=2, marker='o',
cmap=cm,
vmin=0, vmax=100,
transform=plain_crs,
)
cbar = plt.colorbar(sc, fraction=0.03, pad=0.04)
cbar.set_label("Age")
plt.title("Destination", size=24)
plt.show()
# Same as above, zoomed in
plain_crs = ccrs.PlateCarree()
cm = plt.cm.get_cmap('nipy_spectral')
dx = dy = 1.5
uk_extent = [-1.55-dx, -1.55+dx, 53.08-dy, 53.08+dy]
plt.figure(figsize=(10, 20))
ax = plt.axes(projection=plain_crs)
ax.coastlines(resolution='50m')
ax.gridlines()
ax.set_extent(uk_extent, crs=plain_crs)
sc = ax.scatter(migrant_pool["prev_MSOA_lon"],
migrant_pool["prev_MSOA_lat"],
c=migrant_pool["age"],
linewidth=2, marker='o',
cmap=cm,
vmin=0, vmax=100,
transform=plain_crs,
)
cbar = plt.colorbar(sc, fraction=0.03, pad=0.04)
cbar.set_label("Age")
plt.title("Origin", size=24)
plt.figure(figsize=(10, 20))
ax = plt.axes(projection=plain_crs)
ax.coastlines(resolution='50m')
ax.gridlines()
ax.set_extent(uk_extent, crs=plain_crs)
ax.scatter(migrant_pool["MSOA_lon"],
migrant_pool["MSOA_lat"],
c=migrant_pool["age"],
linewidth=2, marker='o',
cmap=cm,
vmin=0, vmax=100,
transform=plain_crs,
)
cbar = plt.colorbar(sc, fraction=0.03, pad=0.04)
cbar.set_label("Age")
plt.title("Destination", size=24)
plt.show()
###Output
_____no_output_____ |
Olympics Analysis.ipynb | ###Markdown
DataFrame
###Code
df
df.isnull().sum()
###Output
_____no_output_____
###Markdown
NAME OF CITIES WHERE SUMMER OLYMPICS IS HELD
###Code
c = []
c = df['City'].unique()
c
###Output
_____no_output_____
###Markdown
NUMBER OF CITIES WHERE SUMMER OLYMPICS IS HELD
###Code
n = len(c)
print("The Number of Cities Where Summer Olympics is held is \n", n)
###Output
The Number of Cities Where Summer Olympics is held is
22
###Markdown
Sport which is having most number of Gold Medals so far (Top 5)
###Code
x = df[df['Medal'] == 'Gold']
gold = []
for i in x['Sport'].unique():
gold.append([i, len(x[x['Sport'] == i])])
gold = pd.DataFrame(gold, columns = ['Sport', 'Medals'])
gold = gold.sort_values(by = 'Medals', ascending = False).head()
gold
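# An equivalent, more idiomatic alternative to the loop above (illustrative sketch):
# gold = x['Sport'].value_counts().head().rename_axis('Sport').reset_index(name='Medals')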
gold.plot(x = 'Sport', y = 'Medals', kind = 'bar', color = 'gold', figsize = (6,6))
###Output
_____no_output_____
###Markdown
Sport with the most medals so far (Top 5)
###Code
tm = []
for m in df['Sport'].unique():
tm.append([m, len(df[df['Sport'] == m])])
tm = pd.DataFrame(tm, columns = ['Sport', 'Total Medals'])
tm = tm.sort_values(by = 'Total Medals', ascending = False).head()
tm
tm.plot(x = 'Sport', y = 'Total Medals', kind = 'bar', color = 'red', figsize = (6,6))
###Output
_____no_output_____
###Markdown
Players who have won most number of medals (Top 5)
###Code
at = []
for ap in df['Athlete'].unique():
at.append([ap, len(df[df['Athlete'] == ap])])
at = pd.DataFrame(at, columns = ['Player', 'Total Medals'])
at = at.sort_values(by = 'Total Medals', ascending = False).head()
at
at.plot(x = 'Player', y = 'Total Medals', kind = 'bar', color = 'green', figsize = (6,6))
###Output
_____no_output_____
###Markdown
Players who have won most number Gold Medals of medals (Top 5)
###Code
x = df[df['Medal'] == 'Gold']
plgold = []
for i in x['Athlete'].unique():
plgold.append([i, len(x[x['Athlete'] == i])])
plgold = pd.DataFrame(plgold, columns = ['Player', 'Gold Medals'])
plgold = plgold.sort_values(by = 'Gold Medals', ascending = False).head()
plgold
plgold.plot(x = 'Player', y = 'Gold Medals', kind = 'bar', color = 'gold', figsize = (6,6))
###Output
_____no_output_____
###Markdown
The year India won its first Gold Medal in the Summer Olympics
###Code
x = df[df['Medal'] == 'Gold']
y = x.loc[x['Country'] == 'IND']
y.iloc[0]
print("The first Gold Medal in Summer Olympics won by India was in the year")
y['Year'].iloc[0]
###Output
The first Gold Medal in Summer Olympics won by India was in the year
###Markdown
Most popular event in terms of number of players (Top 5)
###Code
eve = []
for i in df['Event'].unique():
eve.append([i, len(df[df['Event'] == i])])
eve = pd.DataFrame(eve, columns = ['Event', 'Total Players'])
eve = eve.sort_values(by = 'Total Players', ascending = False).head()
eve
eve.plot(x = 'Event', y = 'Total Players', kind = 'bar', color = 'black', figsize = (6,6))
###Output
_____no_output_____
###Markdown
Sport with the most female Gold Medalists (Top 5)
###Code
x = df[df['Medal'] == 'Gold']
f = x[x['Gender'] == 'Women']
wgold = []
for i in f['Sport'].unique():
wgold.append([i, len(f[f['Sport'] == i])])
wgold = pd.DataFrame(wgold, columns = ['Sport', 'Female Gold Medalists'])
wgold = wgold.sort_values(by = 'Female Gold Medalists', ascending = False).head()
wgold
wgold.plot(x = 'Sport', y = 'Female Gold Medalists', kind = 'bar', color = 'pink', figsize = (6,6))
###Output
_____no_output_____ |
DMDW_LAB_ASSIGNMENT_4.ipynb | ###Markdown
**Download a dataset from Kaggle and plot all the graphs and charts on the downloaded dataset.**
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
url="https://raw.githubusercontent.com/Akash2oc98/18cse037-gietu_DMDW_lab-work/main/vgsales.csv"
df=pd.read_csv(url,sep=',')
df
df.head()
df.dropna(axis=0,inplace=True)
df.shape
df.head()
plt.scatter(df['Year'],df['Global_Sales'],c='green')
plt.xlabel('Years')
plt.ylabel('Global_Sales')
plt.show()
plt.figure(figsize=(10,10))
plt.hist(df['Genre'],color = 'orange',edgecolor = 'white',bins = 12)
plt.figure(figsize=(10,10))
plt.hist(df['Genre'],color = 'red',edgecolor = 'white',bins = 12)
plt.title('Histograms of Genres')
plt.xlabel('Genres')
plt.ylabel('Frequency')
plt.show()
counts = [969, 120, 12, 97, 279]
Platform = ('Misc', 'GB', 'Wii','PS2','PS3')
index = np.arange(len(Platform))
plt.bar(index, counts, color=['red', 'blue', 'cyan','skyblue','pink'])
plt.title('Bar plot of Platforms')
plt.xlabel('Platform Types')
plt.ylabel('Frequency')
plt.xticks(index, Platform, rotation = 90)
plt.show()
sns.set(style="darkgrid")
sns.regplot(x=df['Global_Sales'],y=df['Year'])
sns.set(style="darkgrid")
sns.regplot(x=df['Global_Sales'],y=df['Year'], marker="*",fit_reg=False)
sns.distplot(df['Year'])
sns.distplot(df['Year'],kde=False)
sns.distplot(df['Year'],kde=False,bins=5)
url="https://raw.githubusercontent.com/Akash2oc98/18cse037-gietu_DMDW_lab-work/main/world_alcohol.csv"
data=pd.read_csv(url,sep=',')
data
sns.countplot(x="Beverage Types", data=data)
sns.countplot(x="Beverage Types", data=data, hue = "WHO region")
sns.boxplot(y=data["Display Value"])
sns.boxplot(x=data['Year'], y=data["Display Value"])
plt.figure(figsize=(10,10))
sns.boxplot(x=data['Year'], y=data["Display Value"], hue = "WHO region",data=data)
f, (ax_box, ax_hist) = plt.subplots(2, gridspec_kw={"height_ratios": (.20, .80)})
sns.boxplot(data["Display Value"],ax=ax_box)
sns.distplot(data["Display Value"],ax=ax_hist,kde=False)
sns.pairplot(data, kind="countplot",hue="Beverage Types")
plt.show()
###Output
_____no_output_____ |
Lookup table calculation/LookupTableInitialValuesCalc.ipynb | ###Markdown
[The Two Piece Normal Distribution](https://quantgirl.blog/two-piece-normal/) is used to create the initial distribution of values for the lookup table.
Sigma values are hand-picked based on [Avital Pekker's excellent work](https://avital.ca/notes/a-closer-look-at-apples-breathing-light), in which he explains his approach.
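For reference, the two-piece normal density used below has the standard form (stated here as background, not taken from the linked posts): $$f(x)=\frac{2}{\sqrt{2\pi}\,(\sigma_1+\sigma_2)}\begin{cases}\exp\left(-\frac{(x-\mu)^2}{2\sigma_1^2}\right) & x\le\mu\\ \exp\left(-\frac{(x-\mu)^2}{2\sigma_2^2}\right) & x>\mu\end{cases}$$ so $\sigma_1$ shapes the rise of the breathing curve and $\sigma_2$ its decay.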
###Code
from twopiece.scale import *
from twopiece.shape import *
from twopiece.double import *
import numpy as np
import matplotlib.pyplot as plt
loc = 1.6
sigma1 = 0.5
sigma2 = 0.75
shape = 1
dist = tpnorm(loc=loc, sigma1=sigma1, sigma2=sigma2)
# dist2 = tpstudent(loc=loc, sigma1=sigma1, sigma2=sigma2, shape=shape)
x = np.arange(0, 5, 0.005)
y = dist.pdf(x)
plt.plot(x, y)
np.savetxt("x.csv", x, delimiter=",")
np.savetxt("y.csv", y, delimiter=",")
z = np.array([0.004, 0.004, 0.004, 0.005, 0.005, 0.005, 0.006, 0.006, 0.006, 0.007, 0.007, 0.008, 0.008, 0.008, 0.009, 0.01, 0.01, 0.011, 0.011, 0.012, 0.013, 0.013, 0.014, 0.015, 0.016, 0.017, 0.018, 0.019,
0.02, 0.021, 0.022, 0.023, 0.024, 0.025, 0.027, 0.028, 0.029, 0.031, 0.033, 0.034, 0.036, 0.038, 0.039, 0.041, 0.043, 0.045, 0.047, 0.05, 0.052, 0.054, 0.057, 0.059, 0.062, 0.065, 0.067, 0.07, 0.073,
0.076, 0.08, 0.083, 0.086, 0.09, 0.094, 0.097, 0.101, 0.105, 0.109, 0.113, 0.117, 0.122, 0.126, 0.131, 0.136, 0.14, 0.145, 0.15, 0.156, 0.161, 0.166, 0.172, 0.177, 0.183, 0.189, 0.195, 0.201, 0.207,
0.213, 0.22, 0.226, 0.233, 0.24, 0.246, 0.253, 0.26, 0.267, 0.274, 0.281, 0.289, 0.296, 0.303, 0.311, 0.318, 0.326, 0.333, 0.341, 0.349, 0.356, 0.364, 0.372, 0.379, 0.387, 0.395, 0.403, 0.41, 0.418,
0.426, 0.433, 0.441, 0.449, 0.456, 0.464, 0.471, 0.478, 0.485, 0.493, 0.5, 0.507, 0.513, 0.52, 0.527, 0.533, 0.539, 0.546, 0.552, 0.558, 0.563, 0.569, 0.574, 0.579, 0.584, 0.589, 0.594, 0.598, 0.602,
0.606, 0.61, 0.614, 0.617, 0.62, 0.623, 0.626, 0.628, 0.63, 0.632, 0.634, 0.635, 0.636, 0.637, 0.638, 0.638, 0.638, 0.638, 0.638, 0.638, 0.637, 0.637, 0.636, 0.636, 0.635, 0.634, 0.633, 0.631, 0.63, 0.629,
0.627, 0.626, 0.624, 0.622, 0.62, 0.618, 0.616, 0.614, 0.611, 0.609, 0.606, 0.604, 0.601, 0.598, 0.595, 0.592, 0.589, 0.586, 0.583, 0.579, 0.576, 0.572, 0.569, 0.565, 0.561, 0.558, 0.554, 0.55, 0.546, 0.542,
0.537, 0.533, 0.529, 0.525, 0.52, 0.516, 0.511, 0.507, 0.502, 0.497, 0.493, 0.488, 0.483, 0.478, 0.473, 0.468, 0.464, 0.459, 0.454, 0.449, 0.444, 0.438, 0.433, 0.428, 0.423, 0.418, 0.413, 0.408, 0.403, 0.397,
0.392, 0.387, 0.382, 0.377, 0.372, 0.367, 0.361, 0.356, 0.351, 0.346, 0.341, 0.336, 0.331, 0.326, 0.321, 0.316, 0.311, 0.306, 0.301, 0.296, 0.291, 0.286, 0.281, 0.277, 0.272, 0.267, 0.262, 0.258, 0.253, 0.249,
0.244, 0.24, 0.235, 0.231, 0.226, 0.222, 0.218, 0.213, 0.209, 0.205, 0.201, 0.197, 0.193, 0.189, 0.185, 0.181, 0.177, 0.174, 0.17, 0.166, 0.163, 0.159, 0.156, 0.152, 0.149, 0.145, 0.142, 0.139, 0.136, 0.132, 0.129,
0.126, 0.123, 0.12, 0.117, 0.115, 0.112, 0.109, 0.106, 0.104, 0.101, 0.098, 0.096, 0.094, 0.091, 0.089, 0.086, 0.084, 0.082, 0.08, 0.078, 0.075, 0.073, 0.071, 0.069, 0.067, 0.066, 0.064, 0.062, 0.06, 0.058, 0.057, 0.055,
0.054, 0.052, 0.05, 0.049, 0.047, 0.046, 0.045, 0.043, 0.042, 0.041, 0.039, 0.038, 0.037, 0.036, 0.035, 0.034, 0.033, 0.031, 0.03, 0.029, 0.029, 0.028, 0.027, 0.026, 0.025, 0.024, 0.023, 0.022, 0.022, 0.021, 0.02, 0.02,
0.019, 0.018, 0.018, 0.017, 0.016, 0.016, 0.015, 0.015, 0.014, 0.014, 0.013, 0.013, 0.012, 0.012, 0.011, 0.011, 0.01, 0.01, 0.01, 0.009, 0.009, 0.009, 0.008, 0.008, 0.008, 0.007, 0.007, 0.007, 0.007, 0.006, 0.006, 0.006,
0.006, 0.005, 0.005, 0.005, 0.005, 0.005, 0.004, 0.004, 0.004, 0.004, 0.004, 0.004, 0.003, 0.003, 0.003, 0.003, 0.003, 0.003, 0.003, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.001, 0.001,
0.001, 0.001, 0.001, 0.001, 0.001, 0.001, 0.001, 0.001, 0.001, 0.001, 0.001, 0.001, 0.001, 0.001, 0.001, 0.001, 0.001, 0.001, 0.001, 0.001, 0.001,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
])
z
###Output
_____no_output_____ |
_old_/docker/all/work/connector-examples/Redis.ipynb | ###Markdown
Example to Read / Write to Redis with SparkDocumentation: https://github.com/RedisLabs/spark-redis/NOTE: Spark dataframe integration is limited to Redis hashes only. No other data structures are supported with Spark dataframes.
###Code
import pyspark
from pyspark.sql import SparkSession
# REDIS CONFIGURATION
redis_host = "redis"
redis_port = "6379"
# Spark init
spark = SparkSession \
.builder \
.master("local") \
.appName('jupyter-pyspark') \
.config("spark.redis.host", redis_host)\
.config("spark.redis.port", redis_port)\
.config("spark.jars.packages","com.redislabs:spark-redis_2.12:3.0.0")\
.getOrCreate()
sc = spark.sparkContext
sc.setLogLevel("ERROR")
# read local data
df = spark.read.option("multiline","true").json("/home/jovyan/datasets/json-samples/stocks.json")
df.toPandas()
# Write to back to redis as a hash under the following key stocks
df.write.format("org.apache.spark.sql.redis")\
.mode("overwrite")\
.option("table", "stocks")\
.option("key.column","symbol")\
.save()
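# After the save, each row lives in Redis as a hash whose key combines the table
# name and the key column, i.e. keys of the form "stocks:<symbol>" -- e.g.
# "stocks:AAPL", assuming AAPL is one of the symbols; inspect with
# redis-cli: HGETALL stocks:AAPL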
# read back from Redis!
df1 = spark.read.format("org.apache.spark.sql.redis")\
.option("table", "stocks")\
.option("key.column", "symbol")\
.load()
df1.toPandas()
###Output
_____no_output_____ |
chapter1-2.ipynb | ###Markdown
Code notes: (1) Lines 2-3: import Pandas under the alias pd, and import Series and DataFrame from the pandas module. (2) Line 4: use the Pandas function Series() to create a series of 9 elements (values 1 to 9), specifying the index names ID1, ID2, etc., so that elements can later be accessed by index. (3) Line 5: the series attribute .values stores the element values. (4) Line 6: the series attribute .index stores the element indexes. (5) Line 7: access a given element by positional index (starting from 0); specify multiple positions as a list (e.g. [0,2]). (6) Line 8: access a given element by index name, in single quotes; specify multiple names as a list (e.g. ['ID1','ID3']). (7) Line 9: use the Python operator in to test whether an index name exists; the result is True if it exists, False otherwise. True and False are the only values of Python's boolean type.
###Code
import pandas as pd
from pandas import Series,DataFrame
data=pd.read_excel('北京市空气质量数据.xlsx')
print('date的类型:{0}'.format(type(data)))
print('数据框的行索引:{0}'.format(data.index))
print('数据框的列名:{0}'.format(data.columns))
print('访问AQI和PM2.5所有值:\n{0}'.format(data[['AQI','PM2.5']]))
print('访问第2至3行的AQI和PM2.5:\n{0}'.format(data.loc[1:2,['AQI','PM2.5']]))
print('访问索引1至索引2的第2和4列:\n{0}'.format(data.iloc[1:3,[1,3]]))
data.info()
###Output
date的类型:<class 'pandas.core.frame.DataFrame'>
数据框的行索引:RangeIndex(start=0, stop=2155, step=1)
数据框的列名:Index(['日期', 'AQI', '质量等级', 'PM2.5', 'PM10', 'SO2', 'CO', 'NO2', 'O3'], dtype='object')
访问AQI和PM2.5所有值:
AQI PM2.5
0 81 45
1 145 111
2 74 47
3 149 114
4 119 91
... ... ...
2150 183 138
2151 175 132
2152 30 7
2153 40 13
2154 73 38
[2155 rows x 2 columns]
访问第2至3行的AQI和PM2.5:
AQI PM2.5
1 145 111
2 74 47
访问索引1至索引2的第2和4列:
AQI PM2.5
1 145 111
2 74 47
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 2155 entries, 0 to 2154
Data columns (total 9 columns):
日期 2155 non-null datetime64[ns]
AQI 2155 non-null int64
质量等级 2155 non-null object
PM2.5 2155 non-null int64
PM10 2155 non-null int64
SO2 2155 non-null int64
CO 2155 non-null float64
NO2 2155 non-null int64
O3 2155 non-null int64
dtypes: datetime64[ns](1), float64(1), int64(6), object(1)
memory usage: 151.6+ KB
###Markdown
Code notes: (1) Line 3: use the Pandas function read_excel() to read an Excel file (北京市空气质量数据.xlsx, Beijing air-quality data) into a DataFrame. (2) Line 4: use the Python function type() to inspect the type of the object data; the result shows it is a DataFrame. (3) Lines 5-6: the DataFrame attributes .index and .columns store its row index and column names. Here the row index defaults to 0 through N-1 (N = sample size), and the column names default to the variable names in the first row of the data file. (4) Line 7: access the given variables by column name; multiple column names go in a list (e.g. ['AQI','PM2.5']). (5) Line 8: use the DataFrame attribute .loc to access elements at the given row index and variable names. Note: a DataFrame is a two-dimensional table, so two indexes are needed. (6) Line 9: use the DataFrame attribute .iloc to access elements at the given row and column positions. Note: with positional slicing, the row after the colon is excluded. (7) Line 10: use the DataFrame method info() to show the row index, column names, data types and related information.
###Code
import numpy as np
import pandas as pd
from pandas import Series,DataFrame
df1=DataFrame({'key':['a','d','c','a','b','d','c'],'var1':range(7)})
df2=DataFrame({'key':['a','b','c','c'],'var2':[0,1,2,2]})
df=pd.merge(df1,df2,on='key',how='outer')
df.iloc[0,2]=np.NaN
df.iloc[5,1]=np.NaN
print('合并后的数据:\n{0}'.format(df))
df=df.drop_duplicates()
print('删除重复数据行后的数据:\n{0}'.format(df))
print('判断是否为缺失值:\n{0}'.format(df.isnull()))
print('判断是否不为缺失值:\n{0}'.format(df.notnull()))
print('删除缺失值后的数据:\n{0}'.format(df.dropna()))
fill_value=df[['var1','var2']].apply(lambda x:x.mean())
print('以均值替换缺失值:\n{0}'.format(df.fillna(fill_value)))
###Output
合并后的数据:
key var1 var2
0 a 0.0 NaN
1 a 3.0 0.0
2 d 1.0 NaN
3 d 5.0 NaN
4 c 2.0 2.0
5 c NaN 2.0
6 c 6.0 2.0
7 c 6.0 2.0
8 b 4.0 1.0
删除重复数据行后的数据:
key var1 var2
0 a 0.0 NaN
1 a 3.0 0.0
2 d 1.0 NaN
3 d 5.0 NaN
4 c 2.0 2.0
5 c NaN 2.0
6 c 6.0 2.0
8 b 4.0 1.0
判断是否为缺失值:
key var1 var2
0 False False True
1 False False False
2 False False True
3 False False True
4 False False False
5 False True False
6 False False False
8 False False False
判断是否不为缺失值:
key var1 var2
0 True True False
1 True True True
2 True True False
3 True True False
4 True True True
5 True False True
6 True True True
8 True True True
删除缺失值后的数据:
key var1 var2
1 a 3.0 0.0
4 c 2.0 2.0
6 c 6.0 2.0
8 b 4.0 1.0
以均值替换缺失值:
key var1 var2
0 a 0.0 1.4
1 a 3.0 0.0
2 d 1.0 1.4
3 d 5.0 1.4
4 c 2.0 2.0
5 c 3.0 2.0
6 c 6.0 2.0
8 b 4.0 1.0
###Markdown
Code notes: (1) Lines 4-5: build DataFrames from Python dictionaries. (2) Line 6: use the Pandas function merge() to join the two DataFrames horizontally on the given key, producing a new DataFrame. (3) Lines 7-8: deliberately set certain observations' values to NaN. (4) Line 10: use the DataFrame function drop_duplicates() to drop observations that have duplicate values across all variables. (5) Lines 12-13: use the DataFrame methods .isnull() and .notnull() to test each element for being NaN or not, returning True or False. (6) Line 14: use the DataFrame method .dropna() to drop observations containing NaN. (7) Line 15: use the DataFrame method .apply() with an anonymous function to compute each variable's mean, stored in a series named fill_value. (8) Line 16: use the DataFrame method .fillna() to replace all NaN with the given values (here, fill_value).
###Code
import numpy as np
import pandas as pd
from pandas import Series,DataFrame
data=pd.read_excel('北京市空气质量数据.xlsx')
data=data.replace(0,np.NaN)
data['年']=data['日期'].apply(lambda x:x.year)
month=data['日期'].apply(lambda x:x.month)
quarter_month={'1':'一季度','2':'一季度','3':'一季度',
'4':'二季度','5':'二季度','6':'二季度',
'7':'三季度','8':'三季度','9':'三季度',
'10':'四季度','11':'四季度','12':'四季度'}
data['季度']=month.map(lambda x:quarter_month[str(x)])
bins=[0,50,100,150,200,300,1000]
data['等级']=pd.cut(data['AQI'],bins,labels=['一级优','二级良','三级轻度污染','四级中度污染','五级重度污染','六级严重污染'])
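# Note: pd.cut uses right-closed intervals by default, so bins=[0,50,100,150,200,300,1000]
# produces the groups (0, 50], (50, 100], ..., (300, 1000], matching the AQI level cutoffs.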
print('对AQI的分组结果:\n{0}'.format(data[['日期','AQI','等级','季度']]))
from collections.abc import Iterable # Iterable lives in collections.abc since Python 3.3
isinstance(month,Iterable) # check whether month is an iterable object
###Output
_____no_output_____
###Markdown
Code notes: (1) Line 6: use the DataFrame function replace() to replace 0 (meaning no monitoring result) in the DataFrame with the missing value NaN. (2) Lines 7-8: use the .apply() method with anonymous functions to derive each observation's year and month from the 日期 (date) variable. (3) Line 9: build a dictionary quarter_month relating months to quarters. (4) Line 10: use the Python function map() to map the months 1, 2, 3, etc. in the series month to their corresponding quarters according to the dictionary quarter_month. (5) Line 14: generate a list bins that will later be used to group AQI; it describes the numeric correspondence between AQI and air-quality levels. (6) Line 15: use the Pandas cut() method to bin AQI into groups.
###Code
print('各季度AQI和PM2.5的均值:\n{0}'.format(data.loc[:,['AQI','PM2.5']].groupby(data['季度']).mean()))
print('各季度AQI和PM2.5的描述统计量:\n',data.groupby(data['季度'])['AQI','PM2.5'].apply(lambda x:x.describe()))
def top(df,n=10,column='AQI'):
return df.sort_values(by=column,ascending=False)[:n]
print('空气质量最差的5天:\n',top(data,n=5)[['日期','AQI','PM2.5','等级']])
print('各季度空气质量最差的3天:\n',data.groupby(data['季度']).apply(lambda x:top(x,n=3)[['日期','AQI','PM2.5','等级']]))
print('各季度空气质量情况:\n',pd.crosstab(data['等级'],data['季度'],margins=True,margins_name='总计',normalize=False))
###Output
各季度AQI和PM2.5的均值:
AQI PM2.5
季度
一季度 109.327778 77.225926
三季度 98.911071 49.528131
二季度 109.369004 55.149723
四季度 109.612403 77.195736
各季度AQI和PM2.5的描述统计量:
AQI PM2.5
季度
一季度 count 540.000000 540.000000
mean 109.327778 77.225926
std 80.405408 73.133857
min 26.000000 4.000000
25% 48.000000 24.000000
50% 80.000000 53.000000
75% 145.000000 109.250000
max 470.000000 454.000000
三季度 count 551.000000 551.000000
mean 98.911071 49.528131
std 45.484516 35.394897
min 28.000000 3.000000
25% 60.000000 23.000000
50% 95.000000 41.000000
75% 130.500000 67.000000
max 252.000000 202.000000
二季度 count 542.000000 541.000000
mean 109.369004 55.149723
std 49.608042 35.918345
min 35.000000 5.000000
25% 71.000000 27.000000
50% 99.000000 47.000000
75% 140.750000 73.000000
max 500.000000 229.000000
四季度 count 516.000000 516.000000
mean 109.612403 77.195736
std 84.192134 76.651794
min 21.000000 4.000000
25% 55.000000 25.000000
50% 78.000000 51.000000
75% 137.250000 101.500000
max 485.000000 477.000000
空气质量最差的5天:
日期 AQI PM2.5 等级
1218 2017-05-04 500.0 NaN 六级严重污染
723 2015-12-25 485.0 477.0 六级严重污染
699 2015-12-01 476.0 464.0 六级严重污染
1095 2017-01-01 470.0 454.0 六级严重污染
698 2015-11-30 450.0 343.0 六级严重污染
各季度空气质量最差的3天:
日期 AQI PM2.5 等级
季度
一季度 1095 2017-01-01 470.0 454.0 六级严重污染
45 2014-02-15 428.0 393.0 六级严重污染
55 2014-02-25 403.0 354.0 六级严重污染
三季度 186 2014-07-06 252.0 202.0 五级重度污染
211 2014-07-31 245.0 195.0 五级重度污染
183 2014-07-03 240.0 190.0 五级重度污染
二季度 1218 2017-05-04 500.0 NaN 六级严重污染
1219 2017-05-05 342.0 181.0 六级严重污染
103 2014-04-14 279.0 229.0 五级重度污染
四季度 723 2015-12-25 485.0 477.0 六级严重污染
699 2015-12-01 476.0 464.0 六级严重污染
698 2015-11-30 450.0 343.0 六级严重污染
各季度空气质量情况:
季度 一季度 三季度 二季度 四季度 总计
等级
一级优 145 96 38 108 387
二级良 170 209 240 230 849
三级轻度污染 99 164 152 64 479
四级中度污染 57 72 96 33 258
五级重度污染 48 10 14 58 130
六级严重污染 21 0 2 23 46
总计 540 551 542 516 2149
###Markdown
Code notes: (1) Line 1: use the DataFrame method groupby() to compute the mean AQI and PM2.5 for each quarter. (2) Line 2: compute the basic descriptive statistics (mean, standard deviation, minimum, quartiles, maximum) of AQI and PM2.5 for each quarter. (3) Lines 4-5: define a user function named top: sort a given DataFrame by the specified column (default AQI) in descending order and return the first n (default 10) rows. (4) Line 6: call the user function top on the data DataFrame, sorting by AQI in descending order and returning the first 5 rows, i.e. the 5 days with the highest AQI. (5) Line 7: first group the data by quarter, then apply the user function top to each group, obtaining the 3 highest-AQI days per quarter. (6) Line 8: use the Pandas function crosstab() to cross-tabulate the data by quarter and air-quality level, reporting the sample count in each group.
###Code
pd.get_dummies(data['等级'])
data.join(pd.get_dummies(data['等级']))
###Output
_____no_output_____
###Markdown
Code notes: (1) Line 1: use the Pandas function get_dummies to obtain dummy variables for the categorical variable 等级 (level). (2) Line 2: use the DataFrame method join() to merge the original data and the dummy-variable data horizontally by row index.
###Code
np.random.seed(123)
sampler=np.random.randint(0,len(data),10)
print(sampler)
sampler=np.random.permutation(len(data))[:10]
print(sampler)
data.take(sampler)
data.loc[data['质量等级']=='优',:]
###Output
[1346 1122 1766 2154 1147 1593 1761 96 47 73]
[1883 326 43 1627 1750 1440 993 1469 1892 865]
|
Disciplina de Deep Learning/E03_Leitura_de_um_Dataset.ipynb | ###Markdown
###Code
#@title
%%capture
!rm *
!gdown --id '1BLyWu9zDytBTGR6vLL-UTSAZhpwRQD6b'
!gdown --id '1a5jY17w-SINzRRdq2iH6SFEQiS_ZaAKj'
!gdown --id '1RmUz5LqBQbfn02hvPdwP4ICpWVfa_bTV'
!gdown --id '1ZkG09pQDz29mOFR5VTMPEirvwClVNj_F'
!gdown --id '14zHXla8960NSNjqzf2j1ip8pFYqGyPRJ'
#!gdown --id '1vWtNHG3ehZyjmWUlP85i9z4u7G7Ror7L'
!gdown --id '1KNQKU68Y_XtFN2AS0guGpKDmiMM3YNij'
!gdown --id '1R9LZJw-lvngWOVSInKNhKNkvTPpuS0KI'
#!gdown --id '1UaAWEmE6Igp7P9C5EQJby_EykEiI_fZ1'
!gdown --id '11vJUJtgumou5hkM1a8TribAXkHDlSpz_'
!gdown --id '1jNFpdEtEbtcMgd51K6pjEvjB0hCFEI_E'
!gdown --id '1dNZ8LvkczzE1oOdh_S7cu3Z5eG4T4EqG'
!pip install git+https://github.com/grading/gradememaybe.git
from IPython.display import YouTubeVideo
from gofer import ok
###Output
_____no_output_____
###Markdown
1 File Upload. Use the space below to upload the file to the Google Colab environment. For this you can use the `gdown` command (remember to put the exclamation mark `!` in front to call commands on the command line, outside Python).
###Code
# Your code here
#from google.colab import files
#import io
#uploaded = files.upload()
#f = io.BytesIO(uploaded['Iris.csv'])
import os
os.path.isfile('/content/Iris.csv')
ok.check('e03_1.py')
###Output
_____no_output_____
###Markdown
2 Reading the File. Write a loop that reads every line of the file, creating a list in a variable named 'd0', where each element of the list is a string corresponding to one line of the file.
###Code
# Your code here
f = open('/content/Iris.csv', 'r')
linhas = f.readlines()
d0 = []
for linha in linhas:
d0.append(linha)
ok.check('e03_2.py')
###Output
_____no_output_____
###Markdown
3 Header Removal and Line Count. Using the same variable `d0`, now remove the header (which should be the first element of the list) and count the data lines. Store the total number of lines in a variable named `total`.
###Code
# Your code here
d0.pop(0)
total = len(d0)
ok.check('e03_3.py')
d0
###Output
_____no_output_____
###Markdown
4 Newline Removal. Create a new list in a variable named `d1` where each element of `d1` corresponds to the version of the same element in `d0`, with the newline character `\n` that appears at the end of each item removed. Make sure each element of the list `d1` is a plain string (depending on how you did the upload, the elements may be encoded, so decode them if necessary).
###Code
# Your code here
d1 = []
for linha in d0:
    d1.append(linha.rstrip('\n'))  # rstrip('\n') is safer than [:-1] in case the last line has no trailing newline
ok.check('e03_4.py')
d1
###Output
_____no_output_____
###Markdown
5 Splitting on Commas. Create a new list in a variable named `d2` where each element of this list is a sublist containing the strings corresponding to each field of the original string, using the comma as separator. For example, transform this: `'14,4.3,3.0,1.1,0.1,Iris-setosa'` into this: `['14', '4.3', '3.0', '1.1', '0.1', 'Iris-setosa']`. Do this for every element, in the same order as the original list, storing the result in `d2`.
###Code
# Your code here
d2 = []
for linha in d1:
d2.append(linha.split(','))
ok.check('e03_5.py')
d2
###Output
_____no_output_____
###Markdown
6 Training Inputs. Create a NumPy `array` matrix, in a variable named `d3`, containing all the data from the second, third, fourth and fifth columns of `d2` converted to floating point (string to number). Include neither the first column (the id) nor the last column in this conversion. This will be the input matrix.
###Code
import numpy as np
# Write your code here
d3 = np.zeros((total, 4))
for i, linha in enumerate(d2):
    _, sl, sw, pl, pw, sp = linha
    d3[i, :] = np.array([float(sl), float(sw), float(pl), float(pw)])  # fill one row per sample
ok.check('e03_6.py')
d3
###Output
_____no_output_____
###Markdown
7 Desired Outputs. Create a NumPy `array` for the desired outputs, in _one-hot_ format. To do this, use the strings in the last column of `d2`, checking which plant species each string refers to: `'Iris-setosa'` $\rightarrow$ `[1.0, 0.0, 0.0]` `'Iris-versicolor'` $\rightarrow$ `[0.0, 1.0, 0.0]` `'Iris-virginica'` $\rightarrow$ `[0.0, 0.0, 1.0]` Store the result in a NumPy `array` named `d4`, with the same number of rows as `d3`, where each row of `d4` corresponds to the one-hot vector of the desired output for the respective row of inputs in `d3`.
###Code
# Write your code here
d4 = np.zeros((total, 3))
for i, linha in enumerate(d2[0:]):
_, sl, sw, pl, pw, sp = linha
if sp == 'Iris-setosa':
d4[i,:] = np.array([1, 0, 0])
elif sp == 'Iris-versicolor':
d4[i, :] = np.array([0, 1, 0])
elif sp == 'Iris-virginica':
d4[i, :] = np.array([0, 0, 1])
ok.check('e03_7.py')
d4
# another way (vectorized one-hot per row):
# cat = np.array(['Iris-setosa', 'Iris-versicolor', 'Iris-virginica'])
# d4[i, :] = (cat == sp).astype('float')
###Output
_____no_output_____
###Markdown
8 Input Normalization. Find the maximum and minimum value of each of the four columns of the input matrix. Normalize the input values to the interval between zero and one, saving the result in a NumPy `array` named `d5`.
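A one-line formula for this (standard column-wise min-max scaling): $x' = \frac{x - \min(x)}{\max(x) - \min(x)}$.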
###Code
from sklearn import preprocessing
# Write your code here
d5 = (d3 - np.min(d3, axis=0)) / (np.max(d3, axis=0) - np.min(d3, axis=0))  # column-wise min-max scaling
ok.check('e03_8.py')
d5
###Output
_____no_output_____
###Markdown
9 Shuffling the Data. Randomly shuffle the rows of the training input values contained in `d5`. Do the same with the rows of the desired outputs in `d4`, so as to keep the correspondence between the two. Name the resulting shuffled training-input `array` `x` and the corresponding desired-output `array` `y`.
###Code
# Write your code here
from sklearn.utils import shuffle
x, y = shuffle(d5, d4, random_state = 0)
ok.check('e03_9.py')
###Output
_____no_output_____
###Markdown
10 Splitting Training and Validation Data. Split the pairs of training and validation data in the proportions 90%/10%, respectively. Name the inputs and desired outputs of the training pairs `x_train` and `y_train`, and use the names `x_test` and `y_test` for the validation data. The validation data should correspond to the final rows of the original `x` and `y` arrays.
###Code
from sklearn.model_selection import train_test_split
# Write your code here
x_train = x[:135]
x_test = x[135:]
y_train = y[:135]
y_test = y[135:]
ok.check('e03_10.py')
x_train
y_train
###Output
_____no_output_____ |
02 - Regression 3 - Tuning.ipynb | ###Markdown
Regression - Optimize and save modelsIn the previous notebook, we used complex regression models to look at the relationship between features of a bike rentals dataset. In this notebook, we'll see if we can improve the performance of these models even further.Let's start by loading the bicycle sharing data as a **Pandas** DataFrame and viewing the first few rows. As usual, we'll also split our data into training and test datasets.
###Code
# Import modules we'll need for this notebook
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# load the training dataset
bike_data = pd.read_csv('data/daily-bike-share.csv')
bike_data['day'] = pd.DatetimeIndex(bike_data['dteday']).day
numeric_features = ['temp', 'atemp', 'hum', 'windspeed']
categorical_features = ['season','mnth','holiday','weekday','workingday','weathersit', 'day']
bike_data[numeric_features + ['rentals']].describe()
print(bike_data.head())
# Separate features and labels
# After separating the dataset, we now have numpy arrays named **X** containing the features, and **y** containing the labels.
X, y = bike_data[['season','mnth', 'holiday','weekday','workingday','weathersit','temp', 'atemp', 'hum', 'windspeed']].values, bike_data['rentals'].values
# Split data 70%-30% into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
print ('Training Set: %d rows\nTest Set: %d rows' % (X_train.shape[0], X_test.shape[0]))
###Output
_____no_output_____
###Markdown
Now we have the following four datasets:- **X_train**: The feature values we'll use to train the model- **y_train**: The corresponding labels we'll use to train the model- **X_test**: The feature values we'll use to validate the model- **y_test**: The corresponding labels we'll use to validate the modelNow we're ready to train a model by fitting a *boosting* ensemble algorithm, as in our last notebook. Recall that a Gradient Boosting estimator is like a Random Forest algorithm, but instead of building all the trees independently and averaging the results, each tree is built on the outputs of the previous one in an attempt to incrementally reduce the *loss* (error) in the model.
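In equation form (a standard description of gradient boosting, included here for intuition), the ensemble is built stage by stage as $F_m(x) = F_{m-1}(x) + \nu\, h_m(x)$, where each new tree $h_m$ is fitted to the pseudo-residuals (the negative gradient of the loss) of the current ensemble $F_{m-1}$, and $\nu$ is the **learning_rate** hyperparameter we tune below.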
###Code
# Train the model
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
# Fit a Gradient Boosting model on the training set
model = GradientBoostingRegressor().fit(X_train, y_train)
print (model, "\n")
# Evaluate the model using the test data
predictions = model.predict(X_test)
mse = mean_squared_error(y_test, predictions)
print("MSE:", mse)
rmse = np.sqrt(mse)
print("RMSE:", rmse)
r2 = r2_score(y_test, predictions)
print("R2:", r2)
# Plot predicted vs actual
plt.scatter(y_test, predictions)
plt.xlabel('Actual Labels')
plt.ylabel('Predicted Labels')
plt.title('Daily Bike Share Predictions')
# overlay the regression line
z = np.polyfit(y_test, predictions, 1)
p = np.poly1d(z)
plt.plot(y_test,p(y_test), color='magenta')
plt.show()
###Output
_____no_output_____
###Markdown
Optimize Hyperparameters

Take a look at the **GradientBoostingRegressor** estimator definition in the output above, and note that it, like the other estimators we tried previously, includes a large number of parameters that control the way the model is trained. In machine learning, the term *parameters* refers to values that can be determined from data; values that you specify to affect the behavior of a training algorithm are more correctly referred to as *hyperparameters*.

The specific hyperparameters for an estimator vary based on the algorithm that the estimator encapsulates. In the case of the **GradientBoostingRegressor** estimator, the algorithm is an ensemble that combines multiple decision trees to create an overall predictive model. You can learn about the hyperparameters for this estimator in the [Scikit-Learn documentation](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.GradientBoostingRegressor.html).

We won't go into the details of each hyperparameter here, but they work together to affect the way the algorithm trains a model. In many cases, the default values provided by Scikit-Learn will work well; but there may be some advantage in modifying hyperparameters to get better predictive performance or reduce training time.

So how do you know what hyperparameter values you should use? Well, in the absence of a deep understanding of how the underlying algorithm works, you'll need to experiment. Fortunately, Scikit-Learn provides a way to *tune* hyperparameters by trying multiple combinations and finding the best result for a given performance metric.

Let's try using a *grid search* approach to try combinations from a grid of possible values for the **learning_rate** and **n_estimators** hyperparameters of the **GradientBoostingRegressor** estimator.
###Code
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import make_scorer, r2_score
# Use a Gradient Boosting algorithm
alg = GradientBoostingRegressor()
# Try these hyperparameter values
params = {
'learning_rate': [0.1, 0.5, 1.0],
'n_estimators' : [50, 100, 150]
}
# Find the best hyperparameter combination to optimize the R2 metric
score = make_scorer(r2_score)
gridsearch = GridSearchCV(alg, params, scoring=score, cv=3, return_train_score=True)
gridsearch.fit(X_train, y_train)
print("Best parameter combination:", gridsearch.best_params_, "\n")
# Get the best model
model=gridsearch.best_estimator_
print(model, "\n")
# Evaluate the model using the test data
predictions = model.predict(X_test)
mse = mean_squared_error(y_test, predictions)
print("MSE:", mse)
rmse = np.sqrt(mse)
print("RMSE:", rmse)
r2 = r2_score(y_test, predictions)
print("R2:", r2)
# Plot predicted vs actual
plt.scatter(y_test, predictions)
plt.xlabel('Actual Labels')
plt.ylabel('Predicted Labels')
plt.title('Daily Bike Share Predictions')
# overlay the regression line
z = np.polyfit(y_test, predictions, 1)
p = np.poly1d(z)
plt.plot(y_test,p(y_test), color='magenta')
plt.show()
###Output
_____no_output_____
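###Markdown
As an aside (not part of the original walkthrough): when the grid of candidate values gets large, scikit-learn's `RandomizedSearchCV` samples a fixed number of combinations instead of trying them all. A minimal sketch, reusing the estimator and data from above:
###Code
from scipy.stats import randint, uniform
from sklearn.model_selection import RandomizedSearchCV

# Sample 10 random combinations instead of exhausting a full grid
random_search = RandomizedSearchCV(
    GradientBoostingRegressor(),
    param_distributions={
        'learning_rate': uniform(0.05, 0.95),   # uniform on [0.05, 1.0]
        'n_estimators': randint(50, 200)        # integers in [50, 200)
    },
    n_iter=10, scoring='r2', cv=3, random_state=0)
random_search.fit(X_train, y_train)
print('Best sampled parameters:', random_search.best_params_)
###Output
_____no_output_____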
###Markdown
> **Note**: The use of random values in the Gradient Boosting algorithm results in slightly different metrics each time. In this case, the best model produced by hyperparameter tuning is unlikely to be significantly better than one trained with the default hyperparameter values; but it's still useful to know about the hyperparameter tuning technique!

Preprocess the Data

We trained a model with data that was loaded straight from a source file, with only moderately successful results.

In practice, it's common to perform some preprocessing of the data to make it easier for the algorithm to fit a model to it. There's a huge range of preprocessing transformations you can perform to get your data ready for modeling, but we'll limit ourselves to a few common techniques:

Scaling numeric features

Normalizing numeric features so they're on the same scale prevents features with large values from producing coefficients that disproportionately affect the predictions. For example, suppose your data includes the following numeric features:

| A | B | C |
| - | --- | --- |
| 3 | 480 | 65 |

Normalizing these features to the same scale may result in the following values (assuming A contains values from 0 to 10, B contains values from 0 to 1000, and C contains values from 0 to 100):

| A | B | C |
| --- | --- | --- |
| 0.3 | 0.48 | 0.65 |

There are multiple ways you can scale numeric data, such as calculating the minimum and maximum values for each column and assigning a proportional value between 0 and 1, or by using the mean and standard deviation of a normally distributed variable to maintain the same *spread* of values on a different scale.

Encoding categorical variables

Machine learning models work best with numeric features rather than text values, so you generally need to convert categorical features into numeric representations. For example, suppose your data includes the following categorical feature.

| Size |
| ---- |
| S |
| M |
| L |

You can apply *ordinal encoding* to substitute a unique integer value for each category, like this:

| Size |
| ---- |
| 0 |
| 1 |
| 2 |

Another common technique is to use *one hot encoding* to create individual binary (0 or 1) features for each possible category value. For example, you could use one-hot encoding to translate the possible categories into binary columns like this:

| Size_S | Size_M | Size_L |
| ------ | ------ | ------ |
| 1 | 0 | 0 |
| 0 | 1 | 0 |
| 0 | 0 | 1 |

To apply these preprocessing transformations to the bike rental data, we'll make use of a Scikit-Learn feature named *pipelines*. These enable us to define a set of preprocessing steps that end with an algorithm. You can then fit the entire pipeline to the data, so that the model encapsulates all of the preprocessing steps as well as the regression algorithm. This is useful, because when we want to use the model to predict values from new data, we need to apply the same transformations (based on the same statistical distributions and category encodings used with the training data).

>**Note**: The term *pipeline* is used extensively in machine learning, often to mean very different things! In this context, we're using it to refer to pipeline objects in Scikit-Learn, but you may see it used elsewhere to mean something else.
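Before building the pipeline, here is a minimal standalone sketch (toy values only, not the bike data) of the two transformations just described:
###Code
from sklearn.preprocessing import MinMaxScaler, OneHotEncoder
import numpy as np

# Min-max scaling of the toy numeric columns A, B, C from the tables above;
# the first two rows supply the assumed (min, max) of each column
toy = np.array([[0.0, 0.0, 0.0],
                [10.0, 1000.0, 100.0],
                [3.0, 480.0, 65.0]])
print(MinMaxScaler().fit_transform(toy)[-1])   # -> [0.3  0.48 0.65]

# One-hot encoding of the toy Size column
sizes = np.array([['S'], ['M'], ['L']])
print(OneHotEncoder().fit_transform(sizes).toarray())
###Output
_____no_output_____
###Markdown
Now let's build the full preprocessing and training pipeline for the bike data.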
###Code
# Train the model
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.linear_model import LinearRegression
import numpy as np
# Define preprocessing for numeric columns (scale them)
numeric_features = [6,7,8,9]
numeric_transformer = Pipeline(steps=[
('scaler', StandardScaler())])
# Define preprocessing for categorical features (encode them)
categorical_features = [0,1,2,3,4,5]
categorical_transformer = Pipeline(steps=[
('onehot', OneHotEncoder(handle_unknown='ignore'))])
# Combine preprocessing steps
preprocessor = ColumnTransformer(
transformers=[
('num', numeric_transformer, numeric_features),
('cat', categorical_transformer, categorical_features)])
# Create preprocessing and training pipeline
pipeline = Pipeline(steps=[('preprocessor', preprocessor),
('regressor', GradientBoostingRegressor())])
# Fit the pipeline to train a gradient boosting model on the training set
model = pipeline.fit(X_train, y_train)
print (model)
###Output
_____no_output_____
###Markdown
OK, the model is trained, including the preprocessing steps. Let's see how it performs with the validation data.
###Code
# Get predictions
predictions = model.predict(X_test)
# Display metrics
mse = mean_squared_error(y_test, predictions)
print("MSE:", mse)
rmse = np.sqrt(mse)
print("RMSE:", rmse)
r2 = r2_score(y_test, predictions)
print("R2:", r2)
# Plot predicted vs actual
plt.scatter(y_test, predictions)
plt.xlabel('Actual Labels')
plt.ylabel('Predicted Labels')
plt.title('Daily Bike Share Predictions')
z = np.polyfit(y_test, predictions, 1)
p = np.poly1d(z)
plt.plot(y_test,p(y_test), color='magenta')
plt.show()
###Output
_____no_output_____
###Markdown
The pipeline is composed of the transformations and the algorithm used to train the model. To try an alternative algorithm you can just change that step to a different kind of estimator.
###Code
# Use a different estimator in the pipeline
pipeline = Pipeline(steps=[('preprocessor', preprocessor),
('regressor', RandomForestRegressor())])
# Fit the pipeline to train a random forest model on the training set
model = pipeline.fit(X_train, y_train)
print (model, "\n")
# Get predictions
predictions = model.predict(X_test)
# Display metrics
mse = mean_squared_error(y_test, predictions)
print("MSE:", mse)
rmse = np.sqrt(mse)
print("RMSE:", rmse)
r2 = r2_score(y_test, predictions)
print("R2:", r2)
# Plot predicted vs actual
plt.scatter(y_test, predictions)
plt.xlabel('Actual Labels')
plt.ylabel('Predicted Labels')
plt.title('Daily Bike Share Predictions - Preprocessed')
z = np.polyfit(y_test, predictions, 1)
p = np.poly1d(z)
plt.plot(y_test,p(y_test), color='magenta')
plt.show()
###Output
_____no_output_____
###Markdown
We've now seen a number of common techniques used to train predictive models for regression. In a real project, you'd likely try a few more algorithms, hyperparameters, and preprocessing transformations; but by now you should have got the general idea. Let's explore how you can use the trained model with new data.

Use the Trained Model

First, let's save the model.
###Code
import joblib
# Save the model as a pickle file
filename = './models/bike-share.pkl'
joblib.dump(model, filename)
###Output
_____no_output_____
###Markdown
Now, we can load it whenever we need it, and use it to predict labels for new data. This is often called *scoring* or *inferencing*.
###Code
# Load the model from the file
loaded_model = joblib.load(filename)
# Create a numpy array containing a new observation (for example tomorrow's seasonal and weather forecast information)
X_new = np.array([[1,1,0,3,1,1,0.226957,0.22927,0.436957,0.1869]]).astype('float64')
print ('New sample: {}'.format(list(X_new[0])))
# Use the model to predict tomorrow's rentals
result = loaded_model.predict(X_new)
print('Prediction: {:.0f} rentals'.format(np.round(result[0])))
###Output
_____no_output_____
###Markdown
The model's **predict** method accepts an array of observations, so you can use it to generate multiple predictions as a batch. For example, suppose you have a weather forecast for the next five days; you could use the model to predict bike rentals for each day based on the expected weather conditions.
###Code
# An array of features based on five-day weather forecast
X_new = np.array([[0,1,1,0,0,1,0.344167,0.363625,0.805833,0.160446],
[0,1,0,1,0,1,0.363478,0.353739,0.696087,0.248539],
[0,1,0,2,0,1,0.196364,0.189405,0.437273,0.248309],
[0,1,0,3,0,1,0.2,0.212122,0.590435,0.160296],
[0,1,0,4,0,1,0.226957,0.22927,0.436957,0.1869]])
# Use the model to predict rentals
results = loaded_model.predict(X_new)
print('5-day rental predictions:')
for prediction in results:
print(np.round(prediction))
###Output
_____no_output_____ |
loan_defection_EDA_and_imbalance_dataset.ipynb | ###Markdown
I - Import Libraries
###Code
import pandas as pd
import numpy as np
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
###Output
_____no_output_____
###Markdown
II - EDA
###Code
applicant_df = pd.read_csv('/content/drive/My Drive/Colab Notebooks/ml_classification/applicant.csv')
print('Shape of applicant.csv: ', applicant_df.shape)
applicant_df.head()
loan_df = pd.read_csv('/content/drive/My Drive/Colab Notebooks/ml_classification/loan.csv')
print('Shape of loan.csv: ', loan_df.shape)
loan_df.head()
combined_df = applicant_df.merge(loan_df, on = 'applicant_id')
print('Shape of the dataframe: ', combined_df.shape)
combined_df.head()
###Output
Shape of the dataframe: (1000, 27)
###Markdown
* As observed from the combined applicant and loan dataset, only the string-typed columns have missing data
###Code
combined_df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 1000 entries, 0 to 999
Data columns (total 27 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 applicant_id 1000 non-null int64
1 Primary_applicant_age_in_years 1000 non-null int64
2 Gender 1000 non-null object
3 Marital_status 1000 non-null object
4 Number_of_dependents 1000 non-null int64
5 Housing 1000 non-null object
6 Years_at_current_residence 1000 non-null int64
7 Employment_status 1000 non-null object
8 Has_been_employed_for_at_least 938 non-null object
9 Has_been_employed_for_at_most 747 non-null object
10 Telephone 404 non-null object
11 Foreign_worker 1000 non-null int64
12 Savings_account_balance 817 non-null object
13 Balance_in_existing_bank_account_(lower_limit_of_bucket) 332 non-null object
14 Balance_in_existing_bank_account_(upper_limit_of_bucket) 543 non-null object
15 loan_application_id 1000 non-null object
16 Months_loan_taken_for 1000 non-null int64
17 Purpose 988 non-null object
18 Principal_loan_amount 1000 non-null int64
19 EMI_rate_in_percentage_of_disposable_income 1000 non-null int64
20 Property 846 non-null object
21 Has_coapplicant 1000 non-null int64
22 Has_guarantor 1000 non-null int64
23 Other_EMI_plans 186 non-null object
24 Number_of_existing_loans_at_this_bank 1000 non-null int64
25 Loan_history 1000 non-null object
26 high_risk_applicant 1000 non-null int64
dtypes: int64(12), object(15)
memory usage: 218.8+ KB
###Markdown
* Number of null values in each column that has any null values
* Format of the result: (column_name, number of null values in column_name)
###Code
missing_value_count = combined_df.isna().sum()
column_names = list(combined_df.columns)
[(column_names[index], value) for index, value in enumerate(missing_value_count) if value>0]
###Output
_____no_output_____
###Markdown
* Percentage of missing values in each column, relative to the total number of rows
###Code
[(column_names[index], round(((value/len(combined_df))*100),2))
for index, value in enumerate(missing_value_count) if value>0]
###Output
_____no_output_____
###Markdown
Inference from the Data
* Mean applicant age is 35.55, and the median applicant age is 33
* Number of dependents is around 1 for both mean and median
* Average loan duration is around 21 months, and the median loan duration is 18 months
* The average principal loan amount is 3,271,258, whereas the median principal amount is 2,319,500. Since the mean is almost 1 million higher than the median, a few loans with very high amounts are pulling the average up.
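A quick, hypothetical check of the right skew suggested above (positive skewness confirms a long right tail of large loans):
###Code
# Hypothetical check, not part of the original analysis
print('Skewness of principal amount:', combined_df['Principal_loan_amount'].skew())
###Output
_____no_output_____
###Markdown
The summary statistics: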
###Code
combined_df.iloc[:,1:].describe().transpose()
###Output
_____no_output_____
###Markdown
* There is class imbalance in the classification labels: there are 700 low-risk applicants compared to 300 high-risk applicants.

Are young people more creditworthy?
* Looking at the applicant's age alone, it is difficult to judge creditworthiness, since there are more young people in the low-risk group than in the high-risk group.
* There are 263 applicants below 30 with low risk and 148 applicants below 30 with high risk; hence, by age alone, it is difficult to identify the risk category.
* However, our dataset has 700 low-risk and 300 high-risk applicants: almost 50% of the high-risk applicants are below the age of 30, whereas only 37.5% of the low-risk applicants are below 30.
* The dataset has 406 applicants below the age of 30, of which around 35% are high risk. Of the remaining 594 applicants above the age of 30, around 26% are high risk.
* Around 71% of the low-risk applicants are below the age of 40, and 76% of the high-risk applicants are below the age of 40.
###Code
df_try = (combined_df.groupby(['high_risk_applicant','Primary_applicant_age_in_years']).agg(count_hig_risk_applicants = ('high_risk_applicant', len)))
df_try.count_hig_risk_applicants.unstack(0).fillna(0).plot(kind='bar',subplots=True, layout=(1,2), figsize = (30,10))
df_below_30 = (df_try[df_try.index.isin(list(range(31)), level=1)])
print('Number of low risk applicants below 30: ', (df_below_30[df_below_30.index.isin(list(range(1)), level=0)]).sum())
print('Number of high risk applicants below 30: ', (df_below_30[df_below_30.index.isin(list(range(1,2)), level=0)]).sum())
df_between_30_40 = (df_try[df_try.index.isin(list(range(31,41)), level=1)])
print('Number of low risk applicants between 30 and 40: ', (df_between_30_40[df_between_30_40.index.isin(list(range(1)), level=0)]).sum())
print('Number of high risk applicants between 30 and 40: ', (df_between_30_40[df_between_30_40.index.isin(list(range(1,2)), level=0)]).sum())
df_below_40 = (df_try[df_try.index.isin(list(range(41)), level=1)])
print('Number of low risk applicants below 40: ', (df_below_40[df_below_40.index.isin(list(range(1)), level=0)]).sum())
print('Number of high risk applicants below 40: ', (df_below_40[df_below_40.index.isin(list(range(1,2)), level=0)]).sum())
df_try = (combined_df.groupby(['high_risk_applicant','Primary_applicant_age_in_years']).agg(count_hig_risk_applicants = ('high_risk_applicant', len)))
(df_try[df_try.index.isin(list(range(31)), level=1)]).plot.bar(figsize = (15,10))
df_try = (combined_df.groupby(['high_risk_applicant','Principal_loan_amount']).agg(count_hig_risk_applicants = ('high_risk_applicant', len)))
df_try
###Output
_____no_output_____
###Markdown
Would a person with a critical credit history be more creditworthy?
* Based on past credit history alone, it is quite difficult to predict whether an applicant is high risk or low risk, since applicants with 'delay in paying off loans in the past' make up a small percentage of the overall high-risk category.
* The largest category among both low-risk and high-risk applicants is 'existing loans paid back till now'; because of this, the loan-history feature on its own may not be highly correlated with creditworthiness.
###Code
df_try_loan_history = (combined_df.groupby(['high_risk_applicant','Loan_history']).agg(count_hig_risk_applicants = ('Loan_history', len)))
df_try_loan_history.count_hig_risk_applicants.unstack(0).fillna(0).plot(kind='bar',subplots=True, layout=(1,2), figsize = (30,10))
###Output
_____no_output_____
###Markdown
Would a person with more credit accounts be more creditworthy?
* As per the visuals below, people with just one credit account appear less trustworthy, but this may also be an artifact of the imbalance between the high-risk and low-risk categories.
###Code
df_try = (combined_df.groupby(['high_risk_applicant','Number_of_existing_loans_at_this_bank']).agg(count_hig_risk_applicants = ('high_risk_applicant', len)))
df_try.count_hig_risk_applicants.unstack(0).fillna(0).plot(kind='bar',subplots=True, layout=(1,2), figsize = (25,10))
###Output
_____no_output_____
###Markdown
III - Correlation of Features towards risk category
###Code
object_column_names = (combined_df.loc[:, combined_df.dtypes == np.object]).columns
numeric_column_names = (combined_df.loc[:, combined_df.dtypes == np.int64]).columns
combined_df[numeric_column_names[1:]].corrwith(combined_df['high_risk_applicant'])
combined_df.iloc[:,1:].drop('high_risk_applicant', axis=1).plot(kind='box', subplots=True, layout=(5,5), sharex=False, sharey=False, figsize=(15,15),
title='Box Plot for each input variable')
plt.show()
import pylab as pl
combined_df.iloc[:,1:].drop('high_risk_applicant' ,axis=1).hist(bins=30, figsize=(15,15))
pl.suptitle("Histogram for each numeric input variable")
plt.show()
###Output
_____no_output_____
###Markdown
* Correlation chart
###Code
updated_data_1 = combined_df.iloc[:,1:].drop(columns = object_column_names)
corr = updated_data_1.corr()
fig = plt.figure(figsize = (25,10))
ax = fig.add_subplot(1,1,1)
cax = ax.matshow(corr,cmap='coolwarm', vmin=-1, vmax=1)
fig.colorbar(cax)
ticks = np.arange(0,len(updated_data_1.columns),1)
ax.set_xticks(ticks)
plt.xticks(rotation=90)
ax.set_yticks(ticks)
ax.set_xticklabels(updated_data_1.columns)
ax.set_yticklabels(updated_data_1.columns)
plt.show()
###Output
_____no_output_____
###Markdown
* Heatmap
###Code
plt.figure(figsize=(20,20))
sns.heatmap(combined_df.iloc[:,1:].corr(), annot=True)
plt.show()
###Output
_____no_output_____
###Markdown
IV - Handling missing values
###Code
missing_value_count = combined_df.isna().sum()
column_names = list(combined_df.columns)
missing_null_columns = []
[(missing_null_columns.append(column_names[index]),column_names[index], value) for index, value in enumerate(missing_value_count) if value>0]
[(column_names[index], round(((value/len(combined_df))*100),2))
for index, value in enumerate(missing_value_count) if value>0]
###Output
_____no_output_____
###Markdown
* For columns missing in less than 20% of the rows, we will fill with the mode.
* 'Other_EMI_plans' has more than 800 rows missing out of 1000. We also checked whether the present rows follow a pattern of only high-risk or only low-risk applicants, which was not the case; hence this column will be deleted.
* 'Telephone' has roughly 600 rows missing, and there is no pattern of Telephone being available only for high-risk or low-risk applicants, hence this column will also be dropped.
* The columns 'Has_been_employed_for_at_least' and 'Has_been_employed_for_at_most' are mutually exclusive, hence we will take the average of those two columns in a new column.
* 'Property' and 'Purpose' have very few null values, hence they will be filled with the mode (the most frequently occurring value).
* 'Balance_in_existing_bank_account_lower_limit' and 'Balance_in_existing_bank_account_upper_limit' are not mutually exclusive. We decided to delete these columns, since the number of rows where both columns are null is around 400, and the amount is either 0 or 2 lacs, so these columns may not contribute much to the high-risk category.

* Creation of average employment years for the applicant
###Code
employed_at_least = combined_df['Has_been_employed_for_at_least'].str.extract('(\d+)')
employed_at_most = combined_df['Has_been_employed_for_at_most'].str.extract('(\d+)')
employed_at_least.fillna(value = 0, inplace = True)
employed_at_most.fillna(value = 0, inplace = True)
employed_at_least = pd.to_numeric(employed_at_least.iloc[:,0])
employed_at_most = pd.to_numeric(employed_at_most.iloc[:,0])
employed_df = pd.concat([employed_at_least, employed_at_most], axis = 1)
employed_df['average'] = 0.5 * (employed_df.iloc[:,0] + employed_df.iloc[:,1])
combined_df['Average_employment_years'] = employed_df['average']
###Output
_____no_output_____
###Markdown
* Dropping of 'Other_EMI_plans', 'Telephone', 'Balance_in_existing_bank_account_(lower_limit_of_bucket)' and 'Balance_in_existing_bank_account_(upper_limit_of_bucket)'
###Code
combined_df = combined_df.drop(labels = ['Other_EMI_plans',
'Telephone',
'Balance_in_existing_bank_account_(lower_limit_of_bucket)',
'Balance_in_existing_bank_account_(upper_limit_of_bucket)',
'Has_been_employed_for_at_least',
'Has_been_employed_for_at_most',
'loan_application_id'],
axis = 1)
combined_df.head()
missing_value_count = combined_df.isna().sum()
column_names = list(combined_df.columns)
missing_null_columns = []
[(missing_null_columns.append(column_names[index]),column_names[index], value) for index, value in enumerate(missing_value_count) if value>0]
listed_columns_mode_fill = list(combined_df.columns)
for i in range(len(listed_columns_mode_fill)):
a = len(combined_df[listed_columns_mode_fill[i]].unique())
# print("column: ", i, " number of unique elements are: ", a )
if a <= 10:
combined_df[listed_columns_mode_fill[i]].fillna(combined_df[listed_columns_mode_fill[i]].mode()[0], inplace=True)
combined_df.head()
combined_df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 1000 entries, 0 to 999
Data columns (total 21 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 applicant_id 1000 non-null int64
1 Primary_applicant_age_in_years 1000 non-null int64
2 Gender 1000 non-null object
3 Marital_status 1000 non-null object
4 Number_of_dependents 1000 non-null int64
5 Housing 1000 non-null object
6 Years_at_current_residence 1000 non-null int64
7 Employment_status 1000 non-null object
8 Foreign_worker 1000 non-null int64
9 Savings_account_balance 1000 non-null object
10 Months_loan_taken_for 1000 non-null int64
11 Purpose 1000 non-null object
12 Principal_loan_amount 1000 non-null int64
13 EMI_rate_in_percentage_of_disposable_income 1000 non-null int64
14 Property 1000 non-null object
15 Has_coapplicant 1000 non-null int64
16 Has_guarantor 1000 non-null int64
17 Number_of_existing_loans_at_this_bank 1000 non-null int64
18 Loan_history 1000 non-null object
19 high_risk_applicant 1000 non-null int64
20 Average_employment_years 1000 non-null float64
dtypes: float64(1), int64(12), object(8)
memory usage: 211.9+ KB
###Markdown
V - Handling categorical columns
###Code
object_column_names = (combined_df.loc[:, combined_df.dtypes == np.object]).columns
combined_df[object_column_names].head()
for i in object_column_names:
labelencoder = LabelEncoder()
combined_df[i] = labelencoder.fit_transform(combined_df[i])
combined_df.info()
combined_df.head()
###Output
_____no_output_____
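###Markdown
Note: `LabelEncoder` imposes an arbitrary integer order on nominal categories. A common alternative (not used in this notebook, shown only as a sketch) is one-hot encoding, for example with `pd.get_dummies`:
###Code
# Hypothetical example: one-hot encode a nominal column instead of label-encoding it
example = pd.DataFrame({'Housing': ['own', 'rent', 'for free']})
print(pd.get_dummies(example, columns=['Housing']))
###Output
_____no_output_____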
###Markdown
VI - Handling imbalanced classification labels
###Code
(combined_df.groupby('high_risk_applicant').size())
import seaborn as sns
sns.countplot(combined_df['high_risk_applicant'],label="Count")
plt.title('Classification of High Risk and Low Risk applicants')
plt.show()
# a = list(applicant_df['Housing'].unique())
# b = list(applicant_df['Housing'].value_counts())
# [(a[c],b[c]) for c in range(len(a))]
###Output
_____no_output_____
###Markdown
* Swapping the columns, such that the last column is the dependent variable
###Code
cols = list(combined_df.columns)
a, b = cols.index('high_risk_applicant'), cols.index('Average_employment_years')
cols[b], cols[a] = cols[a], cols[b]
combined_df = combined_df[cols]
combined_df.head()
# class count
class_count_0, class_count_1 = combined_df['high_risk_applicant'].value_counts()
# Separate class
class_0 = combined_df[combined_df['high_risk_applicant'] == 0]
class_1 = combined_df[combined_df['high_risk_applicant'] == 1]# print the shape of the class
print('high_risk_applicant 0:', class_0.shape)
print('high_risk_applicant 1:', class_1.shape)
###Output
high_risk_applicant 0: (700, 21)
high_risk_applicant 1: (300, 21)
###Markdown
Random Undersampling
###Code
class_0_under = class_0.sample(class_count_1)
test_under = pd.concat([class_0_under, class_1], axis=0)
print("total class of 1 and 0:",test_under['high_risk_applicant'].value_counts())# plot the count after under-sampeling
test_under['high_risk_applicant'].value_counts().plot(kind='bar', title='count (target)')
###Output
total class of 1 and 0: 1 300
0 300
Name: high_risk_applicant, dtype: int64
###Markdown
Random oversampling
###Code
class_1_over = class_1.sample(class_count_0, replace = True)
test_over = pd.concat([class_1_over, class_0], axis=0)
print("total class of 1 and 0:",test_over['high_risk_applicant'].value_counts())
test_over['high_risk_applicant'].value_counts().plot(kind='bar', title='count (target)')
###Output
total class of 1 and 0: 1 700
0 700
Name: high_risk_applicant, dtype: int64
###Markdown
Handling imbalance with imblearn library
###Code
import imblearn
from imblearn.under_sampling import TomekLinks
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import NearMiss
from collections import Counter
###Output
/usr/local/lib/python3.6/dist-packages/sklearn/externals/six.py:31: FutureWarning: The module is deprecated in version 0.21 and will be removed in version 0.23 since we've dropped support for Python 2.7. Please rely on the official version of six (https://pypi.org/project/six/).
"(https://pypi.org/project/six/).", FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/utils/deprecation.py:144: FutureWarning: The sklearn.neighbors.base module is deprecated in version 0.22 and will be removed in version 0.24. The corresponding classes / functions should instead be imported from sklearn.neighbors. Anything that cannot be imported from sklearn.neighbors is now part of the private API.
warnings.warn(message, FutureWarning)
###Markdown
Tomek links are pairs of very close instances belonging to opposite classes (one high-risk and one low-risk applicant). Removing the majority-class member of each such pair pushes the two classes further apart, which helps the classification process.
###Code
combined_array = combined_df.to_numpy(dtype = 'int32')
# X = combined_array[:,1:-1]
# y = combined_array[:,-1]
X = combined_df.iloc[:,1:-1]
y = combined_df.iloc[:,-1]
tome_links = TomekLinks(sampling_strategy = 'majority')
x_tome_links, y_tome_links = tome_links.fit_resample(X, y)
print('Original dataset shape', Counter(y))
print('Resample dataset shape', Counter(y_tome_links))
###Output
Original dataset shape Counter({0: 700, 1: 300})
Resample dataset shape Counter({0: 570, 1: 300})
###Markdown
Synthetic Minority Oversampling Technique (SMOTE)
The SMOTE algorithm works in three simple steps:
* It selects an instance of the minority class, which is the high-risk applicant class in our case
* It finds its k nearest neighbors, and places a synthetic point on the line joining the point under consideration and a chosen neighbor
* This process is repeated until the high-risk applicant class is equal in size to the low-risk applicant class
###Code
smote = SMOTE()
x_smote, y_smote = smote.fit_resample(X, y)
print('Original dataset shape', Counter(y))
print('Resample dataset shape', Counter(y_smote))
###Output
Original dataset shape Counter({0: 700, 1: 300})
Resample dataset shape Counter({0: 700, 1: 700})
###Markdown
NearMiss is a technique which focuses on reducing the majority class, which is the low-risk applicant class in our case; the end result has as many low-risk applicants as high-risk applicants.
###Code
near_miss = NearMiss()
x_near_miss, y_near_miss = near_miss.fit_resample(X, y)
print('Original dataset shape', Counter(y))
print('Resample dataset shape', Counter(y_near_miss))
###Output
Original dataset shape Counter({0: 700, 1: 300})
Resample dataset shape Counter({0: 300, 1: 300})
###Markdown
VII - Machine Learning Algorithms

Baseline Algorithm [ZeroR]
###Code
from sklearn.dummy import DummyClassifier
zeroR_training = []
zeroR_testing = []
X_train, X_test, y_train, y_test = train_test_split(x_near_miss, y_near_miss, test_size=0.2, random_state=42)
zero_r = DummyClassifier(strategy="most_frequent")
zero_r.fit(X_train, y_train)
print(round((zero_r.score(X_test, y_test)*100),2))
###Output
48.33
###Markdown
Evaluation metrics
* Accuracy is not the best metric to use when evaluating imbalanced datasets as it can be misleading.
* Metrics that can provide better insight are:
  * __Confusion Matrix__: a table showing correct predictions and types of incorrect predictions.
  * __Precision__: the number of true positives divided by all positive predictions. Precision is also called Positive Predictive Value. It is a measure of a classifier's exactness. Low precision indicates a high number of false positives.
  * __Recall__: the number of true positives divided by the number of positive values in the test data. Recall is also called Sensitivity or the True Positive Rate. It is a measure of a classifier's completeness. Low recall indicates a high number of false negatives.
  * __F1 Score__: the weighted average of precision and recall.
  * __Area Under ROC Curve (AUROC)__: AUROC represents the likelihood of your model distinguishing observations from the two classes. In other words, if you randomly select one observation from each class, what's the probability that your model will be able to "rank" them correctly?
* We will use AUROC to check model performance, since the test dataset may be imbalanced and this metric captures the probability of correctly ranking a high-risk applicant, which the problem statement says we must predict accurately.
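A tiny self-contained illustration of these metrics on made-up labels (hypothetical values, not model output):
###Code
from sklearn.metrics import confusion_matrix, classification_report, roc_auc_score

# Toy ground truth and predictions, just to show the metric calls
y_true_toy = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred_toy = [1, 0, 0, 1, 0, 1, 1, 0]
print(confusion_matrix(y_true_toy, y_pred_toy))       # rows: actual, cols: predicted
print(classification_report(y_true_toy, y_pred_toy))  # precision, recall, F1 per class
print('AUROC:', roc_auc_score(y_true_toy, y_pred_toy))
###Output
_____no_output_____
###Markdown
Logistic Regression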
###Code
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import scorer, accuracy_score, f1_score, confusion_matrix, roc_auc_score
def model_result(X_train, X_test, y_train, y_test, model):
model_1 = model
model_1.fit(X_train, y_train)
print('Accuracy: {}%'.format(round(accuracy_score(y_test, model_1.predict(X_test)) * 100, 2)))
print('ROCAUC score:{}%'.format(round(roc_auc_score(y_test, model_1.predict(X_test))*100, 2)))
print('F1 score: {}% '.format(round(f1_score(y_test, model_1.predict(X_test))*100,2)))
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
X_train_near_miss, X_test_near_miss, y_train_near_miss, y_test_near_miss = train_test_split(x_near_miss, y_near_miss, test_size=0.2, random_state=42)
X_train_tome_links, X_test_tome_links, y_train_tome_links, y_test_tome_links = train_test_split(x_tome_links, y_tome_links, test_size=0.2, random_state=42)
X_train_smote, X_test_smote, y_train_smote, y_test_smote = train_test_split(x_smote, y_smote, test_size=0.2, random_state=42)
print('Results of imbalanced dataset: ')
model_result(X_train, X_test, y_train, y_test, model = LogisticRegression())
print()
print('Results of nearMiss balancing: ')
model_result(X_train_near_miss, X_test_near_miss, y_train_near_miss, y_test_near_miss, model = LogisticRegression())
print()
print('Results of Tomek Links balancing: ')
model_result(X_train_tome_links, X_test_tome_links, y_train_tome_links, y_test_tome_links, model = LogisticRegression())
print()
print('Results of SMOTE balancing: ')
model_result(X_train_smote, X_test_smote, y_train_smote, y_test_smote, model = LogisticRegression())
###Output
Results of imbalanced dataset: 
Accuracy: 70.0%
ROCAUC score:50.0%
F1 score: 0.0%
Results of nearMiss balancing:
Accuracy: 48.0%
ROCAUC score:50.0%
F1 score: 65.17%
Results of Tomek Links balancing: 
Accuracy: 64.0%
ROCAUC score:50.0%
F1 score: 0.0%
Results of SMOTE balancing:
Accuracy: 47.0%
ROCAUC score:50.0%
F1 score: 63.75%
###Markdown
* As observed, the imbalanced dataset has higher accuracy, but the area under the ROC curve is higher for the balanced (or rebalanced) datasets.

Support Vector Machine Algorithm
###Code
from sklearn.svm import SVC
print('Results of imbalanced dataset: ')
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
model_result(X_train, X_test, y_train, y_test, model = SVC(class_weight='balanced', probability=True))
print()
print('Results of nearMiss balancing: ')
X_train, X_test, y_train, y_test = train_test_split(x_near_miss, y_near_miss, test_size=0.2, random_state=42)
model_result(X_train, X_test, y_train, y_test, model = SVC(class_weight='balanced', probability=True))
print()
print('Results of Tomek Links balancing: ')
X_train, X_test, y_train, y_test = train_test_split(x_tome_links, y_tome_links, test_size=0.2, random_state=42)
model_result(X_train, X_test, y_train, y_test, model = SVC(class_weight='balanced', probability=True))
print()
print('Results of SMOTE balancing: ')
X_train, X_test, y_train, y_test = train_test_split(x_smote, y_smote, test_size=0.2, random_state=42)
model_result(X_train, X_test, y_train, y_test, model = SVC(class_weight='balanced', probability=True))
###Output
Results of imbalanced dataset: 
Accuracy: 68.0%
ROCAUC score:53.79%
F1 score: 26.97%
Results of nearMiss balancing:
Accuracy: 68.0%
ROCAUC score:66.66%
F1 score: 55.17%
Results of Tomek Links balancing: 
Accuracy: 67.0%
ROCAUC score:58.43%
F1 score: 38.3%
Results of SMOTE balancing:
Accuracy: 61.0%
ROCAUC score:59.37%
F1 score: 44.1%
###Markdown
XGB Classifier
###Code
from xgboost import XGBClassifier
print('Results of imbalanced dataset: ')
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
model_result(X_train, X_test, y_train, y_test, model = XGBClassifier())
print()
print('Results of nearMiss balancing: ')
X_train, X_test, y_train, y_test = train_test_split(x_near_miss, y_near_miss, test_size=0.2, random_state=42)
model_result(X_train, X_test, y_train, y_test, model = XGBClassifier())
print()
print('Results of Tomek Links balancing: ')
X_train, X_test, y_train, y_test = train_test_split(x_tome_links, y_tome_links, test_size=0.2, random_state=42)
model_result(X_train, X_test, y_train, y_test, model = XGBClassifier())
print()
print('Results of SMOTE balancing: ')
X_train, X_test, y_train, y_test = train_test_split(x_smote, y_smote, test_size=0.2, random_state=42)
model_result(X_train, X_test, y_train, y_test, model = XGBClassifier())
###Output
Results of imbalanced dataset: 
Accuracy: 72.0%
ROCAUC score:57.47%
F1 score: 31.71%
Results of nearMiss balancing:
Accuracy: 79.0%
ROCAUC score:78.95%
F1 score: 77.06%
Results of Tomek Links balancing: 
Accuracy: 71.0%
ROCAUC score:63.64%
F1 score: 48.48%
Results of SMOTE balancing:
Accuracy: 80.0%
ROCAUC score:79.35%
F1 score: 77.47%
###Markdown
Random Forest Classifier
###Code
from sklearn.ensemble import RandomForestClassifier
print('Results of imbalanced dataset: ')
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
model_result(X_train, X_test, y_train, y_test, model = RandomForestClassifier())
print()
print('Results of nearMiss balancing: ')
X_train, X_test, y_train, y_test = train_test_split(x_near_miss, y_near_miss, test_size=0.2, random_state=42)
model_result(X_train, X_test, y_train, y_test, model = RandomForestClassifier())
print()
print('Results of Tomek Links balancing: ')
X_train, X_test, y_train, y_test = train_test_split(x_tome_links, y_tome_links, test_size=0.2, random_state=42)
model_result(X_train, X_test, y_train, y_test, model = RandomForestClassifier())
print()
print('Results of SMOTE balancing: ')
X_train, X_test, y_train, y_test = train_test_split(x_smote, y_smote, test_size=0.2, random_state=42)
model_result(X_train, X_test, y_train, y_test, model = RandomForestClassifier())
###Output
Results of imbalanced dataset: 
Accuracy: 72.0%
ROCAUC score:57.83%
F1 score: 32.1%
Results of nearMiss balancing:
Accuracy: 76.0%
ROCAUC score:75.56%
F1 score: 72.9%
Results of Tomek Links balancing: 
Accuracy: 75.0%
ROCAUC score:67.93%
F1 score: 54.74%
Results of SMOTE balancing:
Accuracy: 82.0%
ROCAUC score:81.27%
F1 score: 79.01%
###Markdown
* Based on the results, the Random Forest model on the SMOTE-balanced dataset gives the most optimal results so far.
* However, with hyperparameter optimization, the probability of correctly predicting a high-risk or low-risk applicant can improve.

Hyperparameter Optimization

Logistic Regression
###Code
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.model_selection import GridSearchCV
import warnings
warnings.filterwarnings('ignore')
# define models and parameters
model = LogisticRegression()
solvers = ['newton-cg', 'lbfgs', 'liblinear']
penalty = ['l2']
c_values = [100, 10, 1.0, 0.1, 0.01]
# define grid search
grid = dict(solver = solvers,
penalty = penalty,
C = c_values)
cv = RepeatedStratifiedKFold(n_splits = 10,
n_repeats = 3,
random_state = 1)
grid_search = GridSearchCV(estimator = model,
param_grid = grid,
n_jobs = -1,
cv = cv,
scoring = 'roc_auc',
error_score = 0)
def get_hyper_results(grid_search, X_train, y_train, logreg):
grid_result = grid_search.fit(X_train, y_train)
means = grid_result.cv_results_['mean_test_score']
stds = grid_result.cv_results_['std_test_score']
params = grid_result.cv_results_['params']
logreg.fit(X_train, y_train)
return round(grid_result.best_score_*100, 2)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
X_train_near_miss, X_test_near_miss, y_train_near_miss, y_test_near_miss = train_test_split(x_near_miss, y_near_miss, test_size=0.2, random_state=42)
X_train_tome_links, X_test_tome_links, y_train_tome_links, y_test_tome_links = train_test_split(x_tome_links, y_tome_links, test_size=0.2, random_state=42)
X_train_smote, X_test_smote, y_train_smote, y_test_smote = train_test_split(x_smote, y_smote, test_size=0.2, random_state=42)
logreg_imbalanced = LogisticRegression()
logreg_near_miss = LogisticRegression()
logreg_tome_links = LogisticRegression()
logreg_smote = LogisticRegression()
logistic_hyper = [get_hyper_results(grid_search, X_train, y_train, logreg_imbalanced),
get_hyper_results(grid_search, X_train_near_miss, y_train_near_miss, logreg_near_miss),
get_hyper_results(grid_search, X_train_tome_links, y_train_tome_links, logreg_tome_links),
get_hyper_results(grid_search, X_train_smote, y_train_smote, logreg_smote)]
print('Area under ROC for imbalanced dataset: {}%'.format(logistic_hyper[0]))
print('Area under ROC for near miss balanced dataset: {}%'.format(logistic_hyper[1]))
print('Area under ROC for Tomek Links balanced dataset: {}%'.format(logistic_hyper[2]))
print('Area under ROC for SMOTE balanced dataset: {}%'.format(logistic_hyper[3]))
###Output
Area under ROC for imbalanced dataset: 67.22%
Area under ROC for near miss balanced dataset: 77.06%
Area under ROC for Tomek Links balanced dataset: 66.28%
Area under ROC for SMOTE balanced dataset: 68.54%
###Markdown
Support Vector Machine Algorithm
###Code
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.model_selection import GridSearchCV
svm_testing_hyperparameter = []
# define model and parameters
model = SVC()
kernel = ['poly', 'rbf', 'sigmoid']
C = [50, 10, 1.0, 0.1, 0.01]
gamma = ['scale']
# define grid search
grid = dict(kernel = kernel,
C = C,
gamma = gamma)
cv = RepeatedStratifiedKFold(n_splits = 10,
n_repeats = 3,
random_state = 1)
grid_search = GridSearchCV(estimator = model,
param_grid = grid,
n_jobs = -1,
cv = cv,
scoring = 'roc_auc',
error_score = 0)
svm_imbalanced = SVC()
svm_near_miss = SVC()
svm_tome_links = SVC()
svm_smote = SVC()
svm_hyper = [get_hyper_results(grid_search, X_train, y_train, svm_imbalanced),
get_hyper_results(grid_search, X_train_near_miss, y_train_near_miss, svm_near_miss),
get_hyper_results(grid_search, X_train_tome_links, y_train_tome_links, svm_tome_links),
get_hyper_results(grid_search, X_train_smote, y_train_smote, svm_smote)]
print('Area under ROC for imbalanced dataset: {}%'.format(svm_hyper[0]))
print('Area under ROC for near miss balanced dataset: {}%'.format(svm_hyper[1]))
print('Area under ROC for Tomek Links balanced dataset: {}%'.format(svm_hyper[2]))
print('Area under ROC for SMOTE balanced dataset: {}%'.format(svm_hyper[3]))
###Output
Area under ROC for imbalanced dataset: 56.29%
Area under ROC for near miss balanced dataset: 71.43%
Area under ROC for Tomek Links balanced dataset: 57.02%
Area under ROC for SMOTE balanced dataset: 57.58%
###Markdown
XGB Classification Algorithm
###Code
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.model_selection import GridSearchCV
# define model and parameters
model = XGBClassifier(learning_rate=0.02, n_estimators=600, objective='binary:logistic',
silent=True, nthread=1)
params = {"n_estimators": [10, 18, 22, 30,50, 60],
"max_depth": [3, 5, 10],
"min_samples_split": [15, 20],
"min_samples_leaf": [5, 10, 20],
"max_leaf_nodes": [20, 40],
"min_weight_fraction_leaf": [0.1]}
cv = RepeatedStratifiedKFold(n_splits = 10,
n_repeats = 3,
random_state = 1)
random_search = GridSearchCV(estimator = model,
param_grid = params,
scoring='roc_auc',
n_jobs = -1,
cv = cv )
grid_search = GridSearchCV(model,
param_grid = params,
scoring = 'roc_auc')
xgb_imbalanced = XGBClassifier()
xgb_near_miss = XGBClassifier()
xgb_tome_links = XGBClassifier()
xgb_smote = XGBClassifier()
xbg_hyper = [get_hyper_results(grid_search, X_train, y_train, xgb_imbalanced),
get_hyper_results(grid_search, X_train_near_miss, y_train_near_miss, xgb_near_miss),
get_hyper_results(grid_search, X_train_tome_links, y_train_tome_links, xgb_tome_links),
get_hyper_results(grid_search, X_train_smote, y_train_smote, xgb_smote)]
print('Area under ROC for imbalanced dataset: {}%'.format(xbg_hyper[0]))
print('Area under ROC for near miss balanced dataset: {}%'.format(xbg_hyper[1]))
print('Area under ROC for Tomek Links balanced dataset: {}%'.format(xbg_hyper[2]))
print('Area under ROC for SMOTE balanced dataset: {}%'.format(xbg_hyper[3]))
###Output
Area under ROC for imbalanced dataset: 69.32%
Area under ROC for near miss balanced dataset: 76.86%
Area under ROC for Tomek Links balanced dataset: 70.44%
Area under ROC for SMOTE balanced dataset: 84.87%
###Markdown
Random Forest Classification Algorithm
###Code
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.model_selection import GridSearchCV
svm_testing_hyperparameter = []
# define model and parameters
model = RandomForestClassifier()
param_grid2 = {"n_estimators": [10, 18, 22],
"max_depth": [3, 5, 10],
"min_samples_split": [15, 20],
"min_samples_leaf": [5, 10, 20],
"max_leaf_nodes": [20, 40],
"min_weight_fraction_leaf": [0.1]}
cv = RepeatedStratifiedKFold(n_splits = 10,
n_repeats = 3,
random_state = 1)
grid_search = GridSearchCV(model,
param_grid=param_grid2,
scoring = 'roc_auc',
cv = cv)
rand_forest_imbalanced = RandomForestClassifier()
rand_forest_near_miss = RandomForestClassifier()
rand_forest_tome_links = RandomForestClassifier()
rand_forest_smote = RandomForestClassifier()
rand_forest_hyper = [get_hyper_results(grid_search, X_train, y_train, rand_forest_imbalanced),
get_hyper_results(grid_search, X_train_near_miss, y_train_near_miss, rand_forest_near_miss),
get_hyper_results(grid_search, X_train_tome_links, y_train_tome_links, rand_forest_tome_links),
get_hyper_results(grid_search, X_train_smote, y_train_smote, rand_forest_smote)]
print('Area under ROC for imbalanced dataset: {}%'.format(rand_forest_hyper[0]))
print('Area under ROC for near miss balanced dataset: {}%'.format(rand_forest_hyper[1]))
print('Area under ROC for Tomek Links balanced dataset: {}%'.format(rand_forest_hyper[2]))
print('Area under ROC for SMOTE balanced dataset: {}%'.format(rand_forest_hyper[3]))
hyper_results = pd.DataFrame()
hyper_results['Logistic Regression'] = pd.Series(data = logistic_hyper)
hyper_results['Support Vector Machine'] = pd.Series(data = svm_hyper)
hyper_results['XGB Classifer'] = pd.Series(data = xbg_hyper)
hyper_results['Random Forest'] = pd.Series(data = rand_forest_hyper)
hyper_results.index = ['imbalanced dataset', 'near_miss balanced dataset', 'Tomek Links balanced dataset', 'SMOTE']
hyper_results
###Output
_____no_output_____
###Markdown
* Based on the above results, the best score on our selected metric comes from the XGB classifier on the SMOTE-balanced dataset; this model has the highest probability of correctly distinguishing a high-risk from a low-risk applicant.
* The next best model is Random Forest on the SMOTE-balanced dataset.

VIII - Prediction
###Code
model = XGBClassifier(learning_rate=0.02, n_estimators=600, objective='binary:logistic',
silent=True, nthread=1)
params = {"n_estimators": [10, 18, 22, 30,50, 60],
"max_depth": [3, 5, 10],
"min_samples_split": [15, 20],
"min_samples_leaf": [5, 10, 20],
"max_leaf_nodes": [20, 40],
"min_weight_fraction_leaf": [0.1]}
cv = RepeatedStratifiedKFold(n_splits = 10,
n_repeats = 3,
random_state = 1)
grid_search = GridSearchCV(model,
param_grid = params,
scoring = 'roc_auc')
xgb_smote = XGBClassifier()
grid_result = grid_search.fit(X_train_smote, y_train_smote)
means = grid_result.cv_results_['mean_test_score']
stds = grid_result.cv_results_['std_test_score']
params = grid_result.cv_results_['params']
xgb_smote.fit(X_train_smote, y_train_smote)
def encode_gender(df):
mapping = {"male": 1, "female": 0}
return df.replace({'Gender': mapping})
def encode_maritial_Status(df):
mapping = {"single": 1, "divorced/seperated/married": 1, 'divorced/seperated': 0, 'married/widowed': 2}
return df.replace({'Marital_status': mapping})
def encode_housing(df):
mapping = {"own": 1, "for free": 0, 'rent': 2}
return df.replace({'Housing': mapping})
def encode_saving_account_balancd(df):
mapping = {"Low": 1, "High": 0, 'Very High': 3, 'Medium': 2}
return df.replace({'Savings_account_balance': mapping})
def encode_purpose(df):
mapping = {"Electronic equipment": 5,
"Education": 4,
'FF & E': 0,
'New Vehicle': 6,
'Used Vechicle': 8,
'Business': 1,
'Domestic Appliances': 3,
'Repair cost': 7,
'Career Development': 2}
return df.replace({'Purpose': mapping})
def encode_propoerty(df):
mapping = {"Real Estate": 2,
"Building society saving agreement / life insurance": 0,
'Car or other': 1}
return df.replace({'Property': mapping})
def encode_loan_history(df):
mapping = {"Critical/pending loans at other banks": 1,
"Existing loans paid back duly till now": 3,
'No loans taken/all loans paid back duly': 4,
'All loans at this bank paid back duly': 0,
'delay in paying off loans in the past': 2}
return df.replace({'Loan_history': mapping})
def encode_employment_status(df):
mapping = {"Skilled-employed/official": 1,
"Unskilled-resident": 3,
'Management/self-employed/highly qualified employee/ officer': 0,
'Unemployed/unskilled-non-resident': 2}
return df.replace({'Employment_status': mapping})
X_test_smote
y_test_smote
#@title Input fields
age = 50 #@param {type:"integer"}
gender = 'male' #@param ["male", "female"]
maritial_status = 'single' #@param ["single", "divorced_seperated_married", "divorced_seperated", "married_widowed"]
dependents = 2 #@param {type:"slider", min:0, max:10, step:1}
housing = 'own' #@param["own", "for free", 'rent']
years_at_current_resident = 10 #@param {type: "slider", min:0, max:50, step:1}
employment_status = 'Skilled-employed/official' #@param ["Skilled-employed/official","Unskilled-resident", "Management/self-employed/highly qualified employee/ officer", "Unemployed/unskilled-non-resident"]
foriegn_worker = '1' #@param ['0', '1']
savings_account_balance = 'Medium' #@param ["Low", "High", 'Very High', 'Medium']
months_loan_taken_for = 50 #@param{type: "slider", min:0, max:200, step:1}
purpose = 'Education' #@param ["Electronic equipment", "Education",'FF & E', 'New Vehicle', 'Used Vechicle', 'Business', 'Domestic Appliances', 'Repair cost', 'Career Development']
principal = 100000 #@param{type: "slider", min:10000, max:58424000, step: 10000}
emi_rate = 1 #@param{type: "slider", min:1, max:10, step:1}
property_1 = 'Real Estate' #@param ["Real Estate","Building society saving agreement / life insurance", 'Car or other']
co_applicant = '1' #@param ['0', '1']
guarantor = '1' #@param ['0', '1']
number_of_loans = 1 #@param{type: "slider", min:1, max:10, step:1}
loan_history = 'All loans at this bank paid back duly' #@param ["Critical/pending loans at other banks", "Existing loans paid back duly till now", 'No loans taken/all loans paid back duly', 'All loans at this bank paid back duly', 'delay in paying off loans in the past']
average_employment_years = 10 #@param{type: "slider", min:0, max:30, step:1}
def predict_risk(age, gender, maritial_status,dependents, housing, years_at_current_resident, employment_status, foriegn_worker, savings_account_balance, months_loan_taken_for, purpose, principal, emi_rate, property_1, co_applicant, guarantor, number_of_loans, loan_history, average_employment_years ):
df = pd.DataFrame.from_dict({'Primary_applicant_age_in_years': [age],
'Gender': [gender],
'Marital_status': [maritial_status],
'Number_of_dependents': [dependents],
'Housing': [housing],
'Years_at_current_residence': [years_at_current_resident],
'Employment_status': [employment_status],
'Foreign_worker': [foriegn_worker],
'Savings_account_balance': [savings_account_balance],
'Months_loan_taken_for': [months_loan_taken_for],
'Purpose': [purpose],
'Principal_loan_amount': [principal],
'EMI_rate_in_percentage_of_disposable_income': [emi_rate],
'Property': [property_1],
'Has_coapplicant': [co_applicant],
'Has_guarantor': [guarantor],
'Number_of_existing_loans_at_this_bank': [number_of_loans],
'Loan_history': [loan_history],
'Average_employment_years': [average_employment_years]})
df = encode_gender(df)
df = encode_maritial_Status(df)
df = encode_housing(df)
df = encode_saving_account_balancd(df)
df = encode_purpose(df)
df = encode_propoerty(df)
df = encode_loan_history(df)
df = encode_employment_status(df)
array_file = df.to_numpy(dtype = 'int32')
pred = xgb_smote.predict_proba(array_file)[0]
print('The probability of applicant being high risk applicant is {}% and probability of applicant being low risk applicant is {}%'.format((round(pred[1]*100,2)),
(round(pred[0]*100,2))))
predict_risk(age, gender, maritial_status, dependents, housing,years_at_current_resident, employment_status, foriegn_worker, savings_account_balance,
months_loan_taken_for, purpose, principal, emi_rate, property_1, co_applicant,
guarantor, number_of_loans, loan_history, average_employment_years)
###Output
The probability of applicant being high risk applicant is 33.97% and probability of applicant being low risk applicant is 66.03%
|
Lab_3_Using_Multiple_Numerical_Features_and_Feature_Scaling.ipynb | ###Markdown
Copyright 2017 Google LLC.
###Code
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Lab 3: Using Multiple Numerical Features and Feature Scaling

**Learning Objectives:**
* Train a model using more than one feature
* Learn the importance of feature transformations
* Introduce linear and log transformations of features

Standard Set-up

We begin with the standard set-up as seen in the last lab. We will again use the [Automobile Data Set](https://archive.ics.uci.edu/ml/datasets/automobile) and replace missing numerical values by the column mean.
###Code
import fnmatch
import math
from IPython import display
from matplotlib import cm
from matplotlib import gridspec
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
from mpl_toolkits.mplot3d import Axes3D
from sklearn import metrics
import tensorflow as tf
from tensorflow.contrib.learn.python.learn import learn_io, estimator
# This line increases the amount of logging when there is an error. You can
# remove it if you want less logging.
tf.logging.set_verbosity(tf.logging.ERROR)
# Set the output display to have two digits for decimal places, for display
# readability only, and limit it to printing 15 rows.
pd.options.display.float_format = '{:.2f}'.format
pd.options.display.max_rows = 15
# Provide the names for the columns since the CSV file with the data does
# not have a header row.
cols = ['symboling', 'losses', 'make', 'fuel-type', 'aspiration', 'num-doors',
'body-style', 'drive-wheels', 'engine-location', 'wheel-base',
'length', 'width', 'height', 'weight', 'engine-type', 'num-cylinders',
'engine-size', 'fuel-system', 'bore', 'stroke', 'compression-ratio',
'horsepower', 'peak-rpm', 'city-mpg', 'highway-mpg', 'price']
# Load in the data from a CSV file that is comma seperated.
car_data = pd.read_csv('https://storage.googleapis.com/ml_universities/cars_dataset/cars_data.csv',
sep=',', names=cols, header=None, encoding='latin-1')
# We randomize the data, just to be sure not to get any pathological
# ordering effects that might harm the performance of stochastic gradient
# descent.
car_data = car_data.reindex(np.random.permutation(car_data.index))
# Coerce all missing entries to NaN, and then replace those by the column mean.
car_data['price'] = pd.to_numeric(car_data['price'], errors='coerce')
car_data['horsepower'] = pd.to_numeric(car_data['horsepower'], errors='coerce')
car_data['peak-rpm'] = pd.to_numeric(car_data['peak-rpm'], errors='coerce')
car_data['city-mpg'] = pd.to_numeric(car_data['city-mpg'], errors='coerce')
car_data['highway-mpg'] = pd.to_numeric(car_data['highway-mpg'], errors='coerce')
car_data.fillna(car_data.mean(), inplace=True)
car_data.describe()
###Output
_____no_output_____
###Markdown
Setting Up the Feature Columns and Input Function for TensorFlow

In order to train a model in TensorFlow, each feature that you want to use for training must be put into a feature column. We create a list of the categorical and numerical features that we will use for training our model. It's okay if one of these lists is empty. We also define `train_input_fn` to use the training data.
###Code
CATEGORICAL_COLUMNS = []
NUMERICAL_COLUMNS = ["price", "horsepower", "city-mpg", "highway-mpg",
"peak-rpm", "compression-ratio"]
def input_fn(dataframe):
"""Constructs a dictionary for the feature columns.
Args:
dataframe: The Pandas DataFrame to use for the input.
Returns:
The feature columns and the associated labels for the provided input.
"""
# Creates a dictionary mapping from each numeric feature column name (k) to
# the values of that column stored in a constant Tensor.
numerical_cols = {k: tf.constant(dataframe[k].values)
for k in NUMERICAL_COLUMNS}
# Creates a dictionary mapping from each categorical feature column name (k)
# to the values of that column stored in a tf.SparseTensor.
categorical_cols = {k: tf.SparseTensor(
indices=[[i, 0] for i in range(dataframe[k].size)],
values=dataframe[k].values,
dense_shape=[dataframe[k].size, 1])
for k in CATEGORICAL_COLUMNS}
# Merges the two dictionaries into one.
  feature_cols = {**numerical_cols, **categorical_cols}
# Converts the label column into a constant Tensor.
label = tf.constant(dataframe[LABEL].values)
# Returns the feature columns and the label.
return feature_cols, label
def train_input_fn():
"""Sets up the input function using the training data.
Returns:
The feature columns to use for training and the associated labels.
"""
return input_fn(training_examples)
###Output
_____no_output_____
###Markdown
Defining the features and linear regression model

We define a function to construct the feature columns, and to define the TensorFlow linear regression model.
###Code
def construct_feature_columns():
"""Construct TensorFlow Feature Columns.
Returns:
A set of feature columns.
"""
feature_set = set([tf.contrib.layers.real_valued_column(feature)
for feature in NUMERICAL_FEATURES])
return feature_set
def define_linear_regression_model(learning_rate):
""" Defines a linear regression model of one feature to predict the target.
Args:
learning_rate: A `float`, the learning rate
Returns:
A linear regressor created with the given parameters
"""
linear_regressor = tf.contrib.learn.LinearRegressor(
feature_columns=construct_feature_columns(),
optimizer=tf.train.GradientDescentOptimizer(learning_rate=learning_rate),
gradient_clip_norm=5.0
)
return linear_regressor
###Output
_____no_output_____
###Markdown
Methods to visualize our results

We define functions to draw a scatter plot (with model names shown in a legend), create a calibration plot, and also to plot the learning curve.
###Code
def make_scatter_plot(dataframe, input_feature, target,
slopes=[], biases=[], model_names=[]):
""" Creates a scatter plot of input_feature vs target along with model names.
Args:
dataframe: the dataframe to visualize
input_feature: the input feature to be used for the x-axis
target: the target to be used for the y-axis
slopes: list of model weights (slope)
    biases: list of model biases (same length as slopes)
model_names: list of model_names to use for legend (same length as slopes)
"""
# Define some colors to use that go from blue towards red
colors = [cm.coolwarm(x) for x in np.linspace(0, 1, len(slopes))]
# Generate the scatter plot
x = dataframe[input_feature]
y = dataframe[target]
plt.ylabel(target)
plt.xlabel(input_feature)
plt.scatter(x, y, color='black', label="")
# Add lines corresponding to the provided models
for i in range (0, len(slopes)):
y_0 = slopes[i] * x.min() + biases[i]
y_1 = slopes[i] * x.max() + biases[i]
plt.plot([x.min(), x.max()], [y_0, y_1],
label=model_names[i], color=colors[i])
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
def make_calibration_plot(predictions, targets):
""" Creates a calibration plot.
Args:
predictions: a list of values predicted by the model being visualized
targets: a list of the target values being predicted that must be the
same length as predictions.
"""
calibration_data = pd.DataFrame()
calibration_data["predictions"] = pd.Series(predictions)
calibration_data["targets"] = pd.Series(targets)
calibration_data.describe()
min_val = calibration_data["predictions"].min()
max_val = calibration_data["predictions"].max()
plt.ylabel("target")
plt.xlabel("prediction")
plt.scatter(predictions, targets, color='black')
plt.plot([min_val, max_val], [min_val, max_val])
def plot_learning_curve(training_losses):
""" Plot the learning curve.
Args:
    training_losses: a list of losses to plot.
"""
plt.ylabel('Loss')
plt.xlabel('Training Steps')
plt.plot(training_losses)
###Output
_____no_output_____
###Markdown
Functions for training the model

We use the same method as in the last lab to define the loss function (RMSE for linear regression) and to train the model.
###Code
def compute_loss(predictions, targets):
""" Computes the loss (RMSE) for linear regression.
Args:
predictions: a list of values predicted by the model.
targets: a list of the target values being predicted that must be the
same length as predictions.
Returns:
The RMSE for the provided predictions and targets.
"""
return math.sqrt(metrics.mean_squared_error(predictions, targets))
def train_model(linear_regressor, steps):
"""Trains a linear regression model.
Args:
linear_regressor: The regressor to train.
steps: A positive `int`, the total number of training steps.
Returns:
The trained regressor.
"""
# In order to see how the model evolves as we train it, we divide the
# steps into ten periods, and show the model after each period.
periods = 10
  steps_per_period = steps // periods  # integer number of steps per period
# Train the model, but do so inside a loop so that we can periodically assess
# loss metrics. We store the loss, slope (feature weight), bias, and a name
# for the model when there is a single feature (which would then allow us
# to plot the model in a scatter plot).
print "Training model..."
training_losses = []
slopes = []
biases = []
model_names = []
for period in range (0, periods):
# Call fit to train the regressor for steps_per_period steps
linear_regressor.fit(input_fn=train_input_fn, steps=steps_per_period)
# Use the predict method to compute the predictions of the current model
predictions = np.array(list(linear_regressor.predict(
input_fn=train_input_fn)))
# Compute the loss between the predictions and correct labels, append
# the loss to the list of losses used to generate the learning curve after
# training is complete, and print the current loss
loss = compute_loss(predictions, training_examples[LABEL])
training_losses.append(loss)
print " Loss after period %02d : %0.3f" % (period, loss)
# When there is a single input feature, add slope, bias and model_name to
# the lists to be used later to plot the model.
if len(NUMERICAL_FEATURES) == 1 and len(CATEGORICAL_FEATURES) == 0:
feature_weight = fnmatch.filter(linear_regressor.get_variable_names(),
'linear/*/weight')
slopes.append(linear_regressor.get_variable_value(
feature_weight[0])[0])
biases.append(linear_regressor.get_variable_value(
'linear/bias_weight')[0])
model_names.append("period_" + str(period))
# Now that training is done print the final loss
print "Final Loss (RMSE) on the training data: %0.3f" % loss
# Generate a figure with the learning curve on the left and either a scatter
# plot or calibration plot (when more than 2 input features) on the right.
plt.figure(figsize=(10, 5))
plt.subplot(1, 2, 1)
plt.title("Learning Curve (RMSE vs time)")
plot_learning_curve(training_losses)
plt.subplot(1, 2, 2)
plt.tight_layout(pad=1.1, w_pad=3.0, h_pad=3.0)
if len(NUMERICAL_FEATURES) > 1 or len(CATEGORICAL_FEATURES) != 0:
plt.title("Calibration Plot")
make_calibration_plot(predictions, training_examples[LABEL])
else:
plt.title("Learned Model by Period on Scatter Plot")
make_scatter_plot(training_examples, NUMERICAL_FEATURES[0], LABEL,
slopes, biases, model_names)
return linear_regressor
###Output
_____no_output_____
###Markdown
Prepare Features

In this lab you'll learn about the need to perform some feature transformations. You'll do this by modifying the processed features before returning them, so expect to modify this function later in this lab.
###Code
def prepare_features(dataframe):
"""Prepares the features for the provided dataset.
Args:
dataframe: A Pandas DataFrame that contains the data set.
Returns:
A new DataFrame that contains the features to be used to train the model.
"""
processed_features = dataframe.copy()
return processed_features
###Output
_____no_output_____
###Markdown
Generate the Training Examples

We simply call `prepare_features` on the `car_data` dataframe. We also include code to plot a histogram of `price`, `highway-mpg` and `city-mpg` to help understand the data we are using to train our model to predict `city-mpg`.
###Code
training_examples = prepare_features(car_data)
plt.figure(figsize=(20, 5))
plt.subplot(1, 3, 1)
plt.title("price")
histogram = car_data["price"].hist(bins=50)
plt.subplot(1, 3, 2)
plt.title("highway-mpg")
histogram = car_data["highway-mpg"].hist(bins=50)
plt.subplot(1, 3, 3)
plt.title("city-mpg")
histogram = car_data["city-mpg"].hist(bins=50)
###Output
_____no_output_____
###Markdown
Task 1: Train a Model Using Two Input Features (2 points)

The focus of this lab is learning some of the issues that arise, and how to address them, when you train a model with multiple features. The first task is to train a model to predict `city-mpg` from `highway-mpg` and `price` without using any feature processing. Remember what you learned in the last lab about how to find a good learning rate and number of steps to train.
###Code
NUMERICAL_FEATURES = ["price", "highway-mpg"]
CATEGORICAL_FEATURES = []
LABEL = "city-mpg"
LEARNING_RATE = 1
STEPS = 50
linear_regressor = define_linear_regression_model(learning_rate = LEARNING_RATE)
linear_regressor = train_model(linear_regressor, steps=STEPS)
print "weight for price:", linear_regressor.get_variable_value(
"linear/price/weight")[0]
print "weight for highway-mpg:", linear_regressor.get_variable_value(
"linear/highway-mpg/weight")[0]
print "bias:", linear_regressor.get_variable_value("linear/bias_weight")
###Output
_____no_output_____
###Markdown
Think about these questions about what you found in training your model in Task 1:
* Look at the weights for the two variables. Do they match what you'd expect to see?
* Given that `highway-mpg` is well correlated with `city-mpg`, what is it you see in the histograms that might explain why it was hard to train the model?
* For linear regression it is important that all of the features are roughly in the same range so that a priori they are treated as equally important. How does the range of the price compare to the highway mpg, and what effect might this have when training the model?

Task 2: Write a Linear Scaling Function (1 point)

There are two characteristics we'd like of numerical features when used together to train a linear model:
* The range of the features is roughly the same.
* To the extent possible, the histogram of each feature roughly resembles a bell curve. Sometimes the data will fit this very well and other times it won't.

As you've already seen in the code, you can take a Pandas column (e.g. `car_data['price']`) and find the min value with `car_data['price'].min()` and likewise find the max with `car_data['price'].max()`. Note that you can use a lambda function to apply `f(x)` to all entries `x` in a Pandas column `feature` using ``` feature.apply(lambda x: f(x))```.

To provide an example of feature transformation, we have provided an implementation for log scaling. Note that we take the log of x+1 for a column value of x so that we are always taking the log of a number greater than 0, since log 0 is not defined. In this data all values are at least 0, so log(x+1) is well defined.

You are to complete the implementation of `linear_scale`, in which you simply stretch/compress and shift the features linearly to fall into the interval [0,1]. The minimum value that occurs will map to 0, the maximum value that occurs will map to 1, (min + max)/2 will map to 0.5, and so on. You will need to make sure that your output from `linear_scale` is a real number (versus an integer). Be sure to test your function on some examples. For example, if the input series originally had values going from 10 to 20, then after applying linear scaling 10 should map to 0, 11 should map to 0.1, 12 should map to 0.2, and so on, with 20 mapping to 1.
###Code
# Perform log scaling
def log_scale(series):
return series.apply(lambda x:math.log(x+1.0))
# Linearly rescales to the range [0, 1]
# You need to write this function. Right now it just returns the same series.
def linear_scale(series):
# add any additional lines of code needed
return series.apply(lambda x: x)
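
# A minimal reference sketch (not the graded answer): one possible way to do
# the rescaling described above, assuming the series is not constant (max > min).
def linear_scale_example(series):
    min_val = float(series.min())
    max_val = float(series.max())
    return series.apply(lambda x: (x - min_val) / (max_val - min_val))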
###Output
_____no_output_____
###Markdown
**Test your scaling procedure** with the following code block that applies these two scaling methods to `price` and `highway-mpg` and then draws a histogram for each.
###Code
def draw_histograms(feature_name):
plt.figure(figsize=(20, 4))
plt.subplot(1, 3, 1)
plt.title(feature_name)
histogram = car_data[feature_name].hist(bins=50)
plt.subplot(1, 3, 2)
plt.title("linear_scaling")
scaled_features = pd.DataFrame()
scaled_features[feature_name] = linear_scale(car_data[feature_name])
histogram = scaled_features[feature_name].hist(bins=50)
plt.subplot(1, 3, 3)
plt.title("log scaling")
log_normalized_features = pd.DataFrame()
log_normalized_features[feature_name] = log_scale(car_data[feature_name])
histogram = log_normalized_features[feature_name].hist(bins=50)
draw_histograms('price')
draw_histograms("highway-mpg")
###Output
_____no_output_____
###Markdown
Task 3 - Training the Model Using the Transformed Features (2 points)

Modify `prepare_features` to apply linear scaling to `price` and `highway-mpg` and then train the best model you can. **Do not modify the target feature, so that the RMSE can be compared to the model you trained in Task 1, and so that your predictions are in the correct range.**

NOTE: It is possible that if your learning rate is too high you will converge to a solution that is not optimal, since you are overshooting and then undershooting the best feature weights as you get close to the optimal solution. So when looking at the scatter plot, if you converge to a model that is not good, try a slightly smaller learning rate.
###Code
def prepare_features(dataframe):
"""Prepares the features for provided dataset.
Args:
dataframe: A Pandas DataFrame expected to contain data from the
desired data set.
Returns:
A new dataFrame that contains the features to be used for the model.
"""
processed_features = dataframe.copy()
# Apply linear scaling to price and highway-mpg here
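  # For example (a sketch, assuming linear_scale from Task 2 is implemented):
  #   processed_features["price"] = linear_scale(processed_features["price"])
  #   processed_features["highway-mpg"] = linear_scale(processed_features["highway-mpg"])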
return processed_features
training_examples = prepare_features(car_data)
histogram = training_examples["highway-mpg"].hist(bins=50)
histogram = training_examples["price"].hist(bins=50)
NUMERICAL_FEATURES = ["price", "highway-mpg"]
CATEGORICAL_FEATURES = []
LABEL = "city-mpg"
LEARNING_RATE = 1
STEPS = 50
linear_regressor = define_linear_regression_model(learning_rate = LEARNING_RATE)
linear_regressor = train_model(linear_regressor, steps=STEPS)
# Let's also look at the weights and bias
print(linear_regressor.get_variable_names())
print("weight for price:", linear_regressor.get_variable_value(
    "linear/price/weight")[0])
print("weight for highway-mpg:", linear_regressor.get_variable_value(
    "linear/highway-mpg/weight")[0])
print("bias:", linear_regressor.get_variable_value("linear/bias_weight"))
###Output
_____no_output_____ |
tutorials/notebook/cx_site_chart_examples/ridgeline_7.ipynb | ###Markdown
Example: CanvasXpress ridgeline Chart No. 7

This example page demonstrates how to use the Python package to create a chart that matches the CanvasXpress online example located at: https://www.canvasxpress.org/examples/ridgeline-7.html

This example is generated using the reproducible JSON obtained from the above page and the `canvasxpress.util.generator.generate_canvasxpress_code_from_json_file()` function. Everything required for the chart to render is included in the code below. Simply run the code block.
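A sketch of how such a snippet can be regenerated from the reproducible JSON (assumptions: the JSON from the page above was saved locally as `ridgeline-7.json`, and the generator takes a file path and returns the generated code as a string; check the package docs for the exact signature):

```python
from canvasxpress.util.generator import generate_canvasxpress_code_from_json_file

code = generate_canvasxpress_code_from_json_file("ridgeline-7.json")  # hypothetical local file
print(code)
```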
###Code
from canvasxpress.canvas import CanvasXpress
from canvasxpress.js.collection import CXEvents
from canvasxpress.render.jupyter import CXNoteBook
cx = CanvasXpress(
render_to="ridgeline7",
data={
"y": {
"vars": [
"0",
"1",
"2",
"3",
"4",
"5",
"6",
"7",
"8",
"9",
"10",
"11",
"12",
"13",
"14",
"15",
"16",
"17",
"18",
"19",
"20",
"21",
"22",
"23",
"24",
"25",
"26",
"27",
"28",
"29",
"30",
"31",
"32",
"33",
"34",
"35",
"36",
"37",
"38",
"39",
"40",
"41",
"42",
"43"
],
"smps": [
"x",
"y"
],
"data": [
[
10,
8.04
],
[
8,
6.95
],
[
13,
7.58
],
[
9,
8.81
],
[
11,
8.33
],
[
14,
9.96
],
[
6,
7.24
],
[
4,
4.26
],
[
12,
10.84
],
[
7,
4.82
],
[
5,
5.68
],
[
10,
9.14
],
[
8,
8.14
],
[
13,
8.74
],
[
9,
8.77
],
[
11,
9.26
],
[
14,
8.1
],
[
6,
6.13
],
[
4,
3.1
],
[
12,
9.13
],
[
7,
7.26
],
[
5,
4.74
],
[
10,
7.46
],
[
8,
6.77
],
[
13,
12.74
],
[
9,
7.11
],
[
11,
7.81
],
[
14,
8.84
],
[
6,
6.08
],
[
4,
5.39
],
[
12,
8.15
],
[
7,
6.42
],
[
5,
5.73
],
[
8,
6.58
],
[
8,
5.76
],
[
8,
7.71
],
[
8,
8.84
],
[
8,
8.47
],
[
8,
7.04
],
[
8,
5.25
],
[
19,
12.5
],
[
8,
5.56
],
[
8,
7.91
],
[
8,
6.89
]
]
},
"z": {
"dataset": [
"I",
"I",
"I",
"I",
"I",
"I",
"I",
"I",
"I",
"I",
"I",
"II",
"II",
"II",
"II",
"II",
"II",
"II",
"II",
"II",
"II",
"II",
"III",
"III",
"III",
"III",
"III",
"III",
"III",
"III",
"III",
"III",
"III",
"IV",
"IV",
"IV",
"IV",
"IV",
"IV",
"IV",
"IV",
"IV",
"IV",
"IV"
]
}
},
config={
"graphType": "Scatter2D",
"hideHistogram": True,
"ridgeBy": "dataset",
"showFilledHistogramDensity": True,
"showHistogramDensity": True
},
width=613,
height=613,
events=CXEvents(),
after_render=[
[
"createHistogram",
[
"dataset",
None,
None
]
]
],
other_init_params={
"version": 35,
"events": False,
"info": False,
"afterRenderInit": False,
"noValidate": True
}
)
display = CXNoteBook(cx)
display.render(output_file="ridgeline_7.html")
###Output
_____no_output_____ |
Assignments&Projects/Linear Regression Model/Notebooks/Cleaning_data.ipynb | ###Markdown
This notebook will be used to clean and tidy up the dataset.

Import libraries for cleaning the dataset.
###Code
import pandas as pd
import numpy as np
# Read the dataset CSV file with pandas and store it in a reusable variable,
# then display a quick look at the data with the .head() function.
ABB = pd.read_csv("D:\AB_NYC_2019.csv")
ABB.head()
# Using pandas .info() function to show the size, characteristics, and fields within the dataset I am using today.
ABB.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 48895 entries, 0 to 48894
Data columns (total 16 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 id 48895 non-null int64
1 name 48879 non-null object
2 host_id 48895 non-null int64
3 host_name 48874 non-null object
4 neighbourhood_group 48895 non-null object
5 neighbourhood 48895 non-null object
6 latitude 48895 non-null float64
7 longitude 48895 non-null float64
8 room_type 48895 non-null object
9 price 48895 non-null int64
10 minimum_nights 48895 non-null int64
11 number_of_reviews 48895 non-null int64
12 last_review 38843 non-null object
13 reviews_per_month 38843 non-null float64
14 calculated_host_listings_count 48895 non-null int64
15 availability_365 48895 non-null int64
dtypes: float64(3), int64(7), object(6)
memory usage: 4.8+ MB
###Markdown
As you can see above, the Airbnb dataset contains exactly 48895 rows and 16 columns. Among these columns there are 3 float fields, 7 integer fields, and 6 object/string fields. You can see that the dataset ranges from the names and locations of the Airbnb rentals to rooming, pricing, and availability.
###Code
# The code below checks which columns contain nan values and how many each column reports.
ABB.isnull().sum()
# Printing out the total number of nan values found in the dataset.
print('For the entire dataset we found a total of 20141 nan values.')
# Using the dropna() function to remove the data that contains nan values in the rows of the dataset.
ABB.dropna(inplace=True)
print('Total number of nan values after running the dropna() function:', ABB.isnull().sum().sum())
###Output
For the entire dataset we found a total of 20141 nan values.
Total number of nan values after running the dropna() function: 0
###Markdown
After searching the dataset for nan values, it was found that there were ~20K records that contain one or more nan values. Later on, in the modeling notebook, I will compare the uncleaned dataset against the cleaned dataset to see how the individual regression scores differ in accuracy.
###Code
ABB.info()
# Keep only the fields I will use for the model, for future regression and analysis.
Cleaned_ABB = ABB[['neighbourhood_group','room_type','price','minimum_nights','number_of_reviews','availability_365']]
Cleaned_ABB.info()
Cleaned_ABB.head()
###Output
_____no_output_____
###Markdown
Removed all unnecessary fields, as I am going to create the model based on the New York districts and room types that are available to rent.
###Code
# Export the newly cleaned dataset; it will be used for reporting and model creation.
ABB.to_csv(r'D:\Cleaned_data.csv')
###Output
_____no_output_____ |
hyperstyle.ipynb | ###Markdown

###Code
#@title **1. Setup**
import os
os.chdir('/content')
CODE_DIR = 'hyperstyle'
# clone repo
!git clone https://github.com/sugi-san/hyperstyle.git $CODE_DIR
# install ninja
!wget https://github.com/ninja-build/ninja/releases/download/v1.8.2/ninja-linux.zip
!sudo unzip ninja-linux.zip -d /usr/local/bin/
!sudo update-alternatives --install /usr/bin/ninja ninja /usr/local/bin/ninja 1 --force
os.chdir(f'./{CODE_DIR}')
# Import Packages
import time
import sys
import pprint
from tqdm import tqdm
import numpy as np
from PIL import Image
import torch
import torchvision.transforms as transforms
import imageio
from IPython.display import HTML
from base64 import b64encode
sys.path.append(".")
sys.path.append("..")
from notebooks.notebook_utils import Downloader, HYPERSTYLE_PATHS, W_ENCODERS_PATHS, run_alignment
from utils.common import tensor2im
from utils.inference_utils import run_inversion
from utils.domain_adaptation_utils import run_domain_adaptation
from utils.model_utils import load_model, load_generator
from function import *
%load_ext autoreload
%autoreload 2
# download pretrained_models
! pip install --upgrade gdown
import gdown
import time
for i in range(10):
if os.path.exists('pretrained_models.zip'):
break
else:
gdown.download('https://drive.google.com/uc?id=1NxGZfkE3THgEfPHbUoLPjCKfpWTo08V1', 'pretrained_models.zip', quiet=False)
time.sleep(1)
! unzip pretrained_models.zip
# set expeiment data
EXPERIMENT_DATA_ARGS = {
"faces": {
"model_path": "./pretrained_models/hyperstyle_ffhq.pt",
"w_encoder_path": "./pretrained_models/faces_w_encoder.pt",
"image_path": "./notebooks/images/face_image.jpg",
"transform": transforms.Compose([
transforms.Resize((256, 256)),
transforms.ToTensor(),
transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])
},
"cars": {
"model_path": "./pretrained_models/hyperstyle_cars.pt",
"w_encoder_path": "./pretrained_models/cars_w_encoder.pt",
"image_path": "./notebooks/images/car_image.jpg",
"transform": transforms.Compose([
transforms.Resize((192, 256)),
transforms.ToTensor(),
transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])
},
"afhq_wild": {
"model_path": "./pretrained_models/hyperstyle_afhq_wild.pt",
"w_encoder_path": "./pretrained_models/afhq_wild_w_encoder.pt",
"image_path": "./notebooks/images/afhq_wild_image.jpg",
"transform": transforms.Compose([
transforms.Resize((256, 256)),
transforms.ToTensor(),
transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])
}
}
experiment_type = 'faces'
EXPERIMENT_ARGS = EXPERIMENT_DATA_ARGS[experiment_type]
# Load HyperStyle Model
model_path = EXPERIMENT_ARGS['model_path']
net, opts = load_model(model_path, update_opts={"w_encoder_checkpoint_path": EXPERIMENT_ARGS['w_encoder_path']})
print('Model successfully loaded!')
# difine function
def generate_mp4(out_name, images, kwargs):
writer = imageio.get_writer(out_name + '.mp4', **kwargs)
for image in tqdm(images):
writer.append_data(image)
writer.close()
def get_latent_and_weight_deltas(inputs, net, opts):
opts.resize_outputs = False
opts.n_iters_per_batch = 5
with torch.no_grad():
_, latent, weights_deltas, _ = run_inversion(inputs.to("cuda").float(), net, opts)
weights_deltas = [w[0] if w is not None else None for w in weights_deltas]
return latent, weights_deltas
def get_result_from_vecs(vectors_a, vectors_b, weights_deltas_a, weights_deltas_b, alpha):
results = []
for i in range(len(vectors_a)):
with torch.no_grad():
cur_vec = vectors_b[i] * alpha + vectors_a[i] * (1 - alpha)
cur_weight_deltas = interpolate_weight_deltas(weights_deltas_a, weights_deltas_b, alpha)
res = net.decoder([cur_vec],
weights_deltas=cur_weight_deltas,
randomize_noise=False,
input_is_latent=True)[0]
results.append(res[0])
return results
def interpolate_weight_deltas(weights_deltas_a, weights_deltas_b, alpha):
cur_weight_deltas = []
for weight_idx, w in enumerate(weights_deltas_a):
if w is not None:
delta = weights_deltas_b[weight_idx] * alpha + weights_deltas_a[weight_idx] * (1 - alpha)
else:
delta = None
cur_weight_deltas.append(delta)
return cur_weight_deltas
def show_mp4(filename, width):
mp4 = open(filename + '.mp4', 'rb').read()
data_url = "data:video/mp4;base64," + b64encode(mp4).decode()
display(HTML("""
<video width="%d" controls autoplay loop>
<source src="%s" type="video/mp4">
</video>
""" % (width, data_url)))
# create the download folder
import os
os.makedirs('download', exist_ok=True)
###Output
_____no_output_____
###Markdown

###Code
#@title **2. Display the photos**
display_pic('./images/pic')
#@title **3. Crop the faces**
# --- alignment ---
import glob
from tqdm import tqdm
reset_folder('./images/align')
files = sorted(glob.glob('./images/pic/*.jpg'))
for file in tqdm(files):
aligned_image = run_alignment(file)
name = os.path.basename(file)
aligned_image.save('./images/align/'+name)
# --- inversion ---
import glob
from tqdm import tqdm
image_paths = sorted(glob.glob('./images/align/*.jpg'))
in_images = []
all_vecs = []
all_weights_deltas = []
img_transforms = EXPERIMENT_ARGS['transform']
if experiment_type == "cars":
resize_amount = (512, 384)
else:
resize_amount = (opts.output_size, opts.output_size)
for image_path in tqdm(image_paths):
#print(f'Working on {os.path.basename(image_path)}...')
original_image = Image.open(image_path)
original_image = original_image.convert("RGB")
input_image = img_transforms(original_image)
# get the weight deltas for each image
result_vec, weights_deltas = get_latent_and_weight_deltas(input_image.unsqueeze(0), net, opts)
all_vecs.append([result_vec])
all_weights_deltas.append(weights_deltas)
in_images.append(original_image.resize(resize_amount))
display_pic('images/align')
#@title **4. Create the video**
# specify which image to edit by file name
input = '04.jpg'#@param {type:"string"}
names = [os.path.basename(x) for x in image_paths]
pic_idx = names.index(input)
# 編集係数
#@markdown ・年齢上限をmaxで、年齢下限をminで設定する(標準はmax=50, min=-50)
max = 50 #@param {type:"slider", min:40, max:70, step:5}
min = -50 #@param {type:"slider", min:-70, max:-40, step:5}
# build the editing vectors
age = torch.load('editing/interfacegan_directions/age.pt').to('cuda')
age = torch.reshape(age,(1, 1, 512))
pose = torch.load('editing/interfacegan_directions/pose.pt').to('cuda')
pose = torch.reshape(pose,(1, 18, 512))
w = pose*0.8+age
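# Presumably this blends the age direction with a damped (0.8x) pose direction,
# so the age sweep below changes apparent age while keeping head pose roughly stable.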
# image generation function
def result_img(cur_vec, cur_weight_deltas):
res = net.decoder([cur_vec],
weights_deltas=cur_weight_deltas,
randomize_noise=False,
input_is_latent=True)[0]
output_im = tensor2im(res[0])
return output_im
# generate frames
reset_folder('im')
from tqdm import tqdm
num = 0
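# Sweep the editing coefficient from 0 down to `min`, up through `max`, and
# back to 0, saving one frame per step (latent offset = w * coefficient / 10).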
for i in tqdm(range(0, min, -1)):
cur_vec = all_vecs[pic_idx][0]+w*i/10
cur_weight_deltas = all_weights_deltas[pic_idx]
output_im = result_img(cur_vec, cur_weight_deltas)
output_im.save('im/'+str(num).zfill(4)+'.jpg')
num +=1
for j in tqdm(range(min, max)):
cur_vec = all_vecs[pic_idx][0]+w*j/10
cur_weight_deltas = all_weights_deltas[pic_idx]
output_im = result_img(cur_vec, cur_weight_deltas)
output_im.save('im/'+str(num).zfill(4)+'.jpg')
num +=1
for k in tqdm(range(max, 0, -1)):
cur_vec = all_vecs[pic_idx][0]+w*k/10
cur_weight_deltas = all_weights_deltas[pic_idx]
output_im = result_img(cur_vec, cur_weight_deltas)
output_im.save('im/'+str(num).zfill(4)+'.jpg')
num +=1
# create side-by-side frames (original | edited)
import cv2
import glob
reset_folder('im2')
img1 = cv2.imread('images/align/'+input)
files = sorted(glob.glob('im/*.jpg'))
for i, file in enumerate(tqdm(files)):
img2 = cv2.imread(file)
img3 = cv2.hconcat([img1, img2])
cv2.imwrite('im2/'+str(i).zfill(4)+'.jpg', img3)
# create & play the videos
print('making movie...')
! ffmpeg -y -r 20 -i im/%04d.jpg -vcodec libx264 -pix_fmt yuv420p -loglevel error output.mp4
! ffmpeg -y -r 20 -i im2/%04d.jpg -vcodec libx264 -pix_fmt yuv420p -loglevel error output2.mp4
show_mp4('output2', 600)
#@title **5. Download the video**
#@markdown - To download only the right-hand (edited) video, check "only"
import shutil
only = False #@param {type:"boolean"}
if only == True:
download_name = 'download/'+os.path.splitext(input)[0]+'_o.mp4'
shutil.copy('output.mp4', download_name)
else:
download_name = 'download/'+os.path.splitext(input)[0]+'.mp4'
shutil.copy('output2.mp4', download_name)
from google.colab import files
files.download(download_name)
###Output
_____no_output_____
###Markdown

###Code
#@title **6. Upload photos**
#@markdown - Use a photo that shows exactly one face
# upload images to the root directory
from google.colab import files
reset_folder('pic')
uploaded = files.upload()
uploaded = list(uploaded.keys())
# move the files from root to the target folder
for file in uploaded:
shutil.move(file, 'pic')
display_pic('pic')
###Output
_____no_output_____
###Markdown
###Code
#@title Setup
import os
os.chdir('/content')
CODE_DIR = 'hyperstyle'
# clone repo
!git clone https://github.com/cedro3/hyperstyle.git $CODE_DIR
# install ninja
!wget https://github.com/ninja-build/ninja/releases/download/v1.8.2/ninja-linux.zip
!sudo unzip ninja-linux.zip -d /usr/local/bin/
!sudo update-alternatives --install /usr/bin/ninja ninja /usr/local/bin/ninja 1 --force
os.chdir(f'./{CODE_DIR}')
# Import Packages
import time
import sys
import pprint
from tqdm import tqdm
import numpy as np
from PIL import Image
import torch
import torchvision.transforms as transforms
import imageio
from IPython.display import HTML
from base64 import b64encode
sys.path.append(".")
sys.path.append("..")
from notebooks.notebook_utils import Downloader, HYPERSTYLE_PATHS, W_ENCODERS_PATHS, run_alignment
from utils.common import tensor2im
from utils.inference_utils import run_inversion
from utils.domain_adaptation_utils import run_domain_adaptation
from utils.model_utils import load_model, load_generator
from function import *
%load_ext autoreload
%autoreload 2
# download pretrained_models
! pip install --upgrade gdown
import gdown
gdown.download('https://drive.google.com/uc?id=1NxGZfkE3THgEfPHbUoLPjCKfpWTo08V1', 'pretrained_models.zip', quiet=False)
! unzip pretrained_models.zip
# set expeiment data
EXPERIMENT_DATA_ARGS = {
"faces": {
"model_path": "./pretrained_models/hyperstyle_ffhq.pt",
"w_encoder_path": "./pretrained_models/faces_w_encoder.pt",
"image_path": "./notebooks/images/face_image.jpg",
"transform": transforms.Compose([
transforms.Resize((256, 256)),
transforms.ToTensor(),
transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])
},
"cars": {
"model_path": "./pretrained_models/hyperstyle_cars.pt",
"w_encoder_path": "./pretrained_models/cars_w_encoder.pt",
"image_path": "./notebooks/images/car_image.jpg",
"transform": transforms.Compose([
transforms.Resize((192, 256)),
transforms.ToTensor(),
transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])
},
"afhq_wild": {
"model_path": "./pretrained_models/hyperstyle_afhq_wild.pt",
"w_encoder_path": "./pretrained_models/afhq_wild_w_encoder.pt",
"image_path": "./notebooks/images/afhq_wild_image.jpg",
"transform": transforms.Compose([
transforms.Resize((256, 256)),
transforms.ToTensor(),
transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])
}
}
experiment_type = 'faces'
EXPERIMENT_ARGS = EXPERIMENT_DATA_ARGS[experiment_type]
# Load HyperStyle Model
model_path = EXPERIMENT_ARGS['model_path']
net, opts = load_model(model_path, update_opts={"w_encoder_checkpoint_path": EXPERIMENT_ARGS['w_encoder_path']})
print('Model successfully loaded!')
# difine function
def generate_mp4(out_name, images, kwargs):
writer = imageio.get_writer(out_name + '.mp4', **kwargs)
for image in tqdm(images):
writer.append_data(image)
writer.close()
def get_latent_and_weight_deltas(inputs, net, opts):
opts.resize_outputs = False
opts.n_iters_per_batch = 5
with torch.no_grad():
_, latent, weights_deltas, _ = run_inversion(inputs.to("cuda").float(), net, opts)
weights_deltas = [w[0] if w is not None else None for w in weights_deltas]
return latent, weights_deltas
def get_result_from_vecs(vectors_a, vectors_b, weights_deltas_a, weights_deltas_b, alpha):
results = []
for i in range(len(vectors_a)):
with torch.no_grad():
cur_vec = vectors_b[i] * alpha + vectors_a[i] * (1 - alpha)
cur_weight_deltas = interpolate_weight_deltas(weights_deltas_a, weights_deltas_b, alpha)
res = net.decoder([cur_vec],
weights_deltas=cur_weight_deltas,
randomize_noise=False,
input_is_latent=True)[0]
results.append(res[0])
return results
def interpolate_weight_deltas(weights_deltas_a, weights_deltas_b, alpha):
cur_weight_deltas = []
for weight_idx, w in enumerate(weights_deltas_a):
if w is not None:
delta = weights_deltas_b[weight_idx] * alpha + weights_deltas_a[weight_idx] * (1 - alpha)
else:
delta = None
cur_weight_deltas.append(delta)
return cur_weight_deltas
def show_mp4(filename, width):
mp4 = open(filename + '.mp4', 'rb').read()
data_url = "data:video/mp4;base64," + b64encode(mp4).decode()
display(HTML("""
<video width="%d" controls autoplay loop>
<source src="%s" type="video/mp4">
</video>
""" % (width, data_url)))
#@title Display sample images
display_pic('./images/pic')
#@title Alignment
import glob
from tqdm import tqdm
reset_folder('./images/align')
files = sorted(glob.glob('./images/pic/*.jpg'))
for file in tqdm(files):
aligned_image = run_alignment(file)
name = os.path.basename(file)
aligned_image.save('./images/align/'+name)
display_pic('./images/align')
#@title Run inversion
import glob
image_paths = sorted(glob.glob('./images/align/*.jpg'))
in_images = []
all_vecs = []
all_weights_deltas = []
img_transforms = EXPERIMENT_ARGS['transform']
if experiment_type == "cars":
resize_amount = (512, 384)
else:
resize_amount = (opts.output_size, opts.output_size)
for image_path in image_paths:
#print(f'Working on {os.path.basename(image_path)}...')
original_image = Image.open(image_path)
original_image = original_image.convert("RGB")
input_image = img_transforms(original_image)
# get the weight deltas for each image
result_vec, weights_deltas = get_latent_and_weight_deltas(input_image.unsqueeze(0), net, opts)
all_vecs.append([result_vec])
all_weights_deltas.append(weights_deltas)
in_images.append(original_image.resize(resize_amount))
n_transition = 25
if experiment_type == "cars":
SIZE = 384
else:
SIZE = opts.output_size
images = []
image_paths.append(image_paths[0])
all_vecs.append(all_vecs[0])
all_weights_deltas.append(all_weights_deltas[0])
in_images.append(in_images[0])
for i in tqdm(range(1, len(image_paths))):
if i == 0:
alpha_vals = [0] * 10 + np.linspace(0, 1, n_transition).tolist() + [1] * 5
else:
alpha_vals = [0] * 5 + np.linspace(0, 1, n_transition).tolist() + [1] * 5
for alpha in alpha_vals:
image_a = np.array(in_images[i - 1])
image_b = np.array(in_images[i])
image_joint = np.zeros_like(image_a)
up_to_row = int((SIZE - 1) * alpha)
if up_to_row > 0:
image_joint[:(up_to_row + 1), :, :] = image_b[((SIZE - 1) - up_to_row):, :, :]
if up_to_row < (SIZE - 1):
image_joint[up_to_row:, :, :] = image_a[:(SIZE - up_to_row), :, :]
result_image = get_result_from_vecs(all_vecs[i - 1], all_vecs[i],
all_weights_deltas[i - 1], all_weights_deltas[i],
alpha)[0]
if experiment_type == "cars":
result_image = result_image[:, 64:448, :]
output_im = tensor2im(result_image)
res = np.concatenate([image_joint, np.array(output_im)], axis=1)
images.append(res)
#@title Create the video
kwargs = {'fps': 15}
save_path = "./notebooks/animations"
os.makedirs(save_path, exist_ok=True)
gif_path = os.path.join(save_path, f"{experiment_type}_gif")
generate_mp4(gif_path, images, kwargs)
show_mp4(gif_path, width=opts.output_size)
###Output
_____no_output_____ |
data-crunch-competition.ipynb | ###Markdown
QuickStart

Basic steps and workflow:
0 - Using this notebook
1 - Download data
2 - Explore data
3 - Choose and train a model
4 - Scoring
5 - Make prediction
6 - Submit

---

0 - Using this notebook

To execute a cell press `shift+enter`. Follow the steps and log in with your Google account.
###Code
import tensorflow as tf
tf.test.gpu_device_name()
# Lib & Dependencies
import pandas as pd
import numpy as np
import xgboost as xgb
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.metrics import classification_report
import requests
from scipy import stats
import seaborn as sns
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
1 - Download data

We will provide you with two datasets:
- Training_data will be used to train your model.
- Hackathon_data will be used to make your predictions.

There are three targets you need to provide predictions on: target_r, target_g, target_b.
###Code
# Data Download (may take a few minutes depending on your network)
train_datalink_X = 'https://tournament.datacrunch.com/data/X_train.csv'
train_datalink_y = 'https://tournament.datacrunch.com/data/y_train.csv'
hackathon_data_link = 'https://tournament.datacrunch.com/data/X_test.csv'
# Data for training
train_data = pd.read_csv(train_datalink_X)
# Data for which you will submit your prediction
test_data = pd.read_csv(hackathon_data_link)
# Targets to be predicted
train_targets = pd.read_csv(train_datalink_y)
# If you don't want to look at the problem as a time serie
train_data.drop(['id' ], axis=1, inplace=True)
test_data.drop(['id'], axis=1, inplace=True)
display(train_data)
display(train_targets)
display(test_data)
###Output
_____no_output_____
###Markdown
2 - Explore Data

Data processing is one of the most important parts. Observe your data and carefully prepare what you will give to your model for training.
###Code
display(train_data.describe())
display(train_targets.describe())
train_wd_targets = pd.concat([train_data, train_targets], axis=1)
train_corr = train_wd_targets.corr()
test_corr = test_data.corr()
plt.figure(figsize=(18, 8))
sns.heatmap(train_corr, annot=True, fmt='.2f', cmap='coolwarm', linewidths=0.4)
plt.figure(figsize=(18, 8))
sns.heatmap(test_corr, annot=True, fmt='.2f', cmap='coolwarm', linewidths=0.4)
print(train_data.shape, test_data.shape)
df_combind = pd.concat([train_data, test_data], axis=0)
df_combind.shape
combind_corr = df_combind.corr()
plt.figure(figsize=(18, 8))
sns.heatmap(combind_corr, annot=True, fmt='.2f', cmap='coolwarm', linewidths=0.4)
sns.countplot(y = train_data.Moons)
def extract_stat_features(df, grouper):
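    # Aggregate every numeric column per group (mean/std/min/max), flatten the
    # resulting MultiIndex column names, then drop near-constant aggregate columns.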
feat_df = df.groupby([grouper]).agg([np.mean, np.std, np.min, np.max]).reset_index()
feat_df.columns = feat_df.columns.map('_'.join).str.strip()
temp_df = pd.DataFrame(feat_df.nunique(), columns=['values'])
    ll = temp_df[temp_df['values'] <= 2].index.tolist()  # note: temp_df.values would hit the DataFrame attribute, not the column
feat_df.drop(ll, axis=1, inplace=True)
return feat_df
feat_df = extract_stat_features(train_data, grouper='Moons')
feat_df
###Output
_____no_output_____
###Markdown
3 - Choose models

Crunch with originality!!! 👨🏻‍🏭
###Code
train_data = train_data.merge(feat_df, how = 'left', left_on = ['Moons'], right_on=['Moons_'])
from sklearn.model_selection import RandomizedSearchCV, RepeatedStratifiedKFold, ShuffleSplit, learning_curve, RepeatedKFold
import sklearn
sklearn.metrics.SCORERS.keys()
estimator = xgb.XGBRegressor(objective='reg:squarederror', random_state=42)
parameters = {
'learning_rate': [0.3, 0.1, 0.01, 0.05],
'max_depth': [3, 5, 7, 10],
'min_child_weight': [1, 3, 5],
'subsample': [0.5, 0.7],
'colsample_bytree': [0.5, 0.7],
    'objective': ['reg:squarederror', 'reg:linear'],  # 'reg:gbtree' removed: gbtree is a booster, not an objective
'n_estimators': range(50, 1000, 50),
}
cv = RepeatedKFold(n_splits=10, n_repeats=3, random_state=42)
rand_search = RandomizedSearchCV(
estimator=estimator,
param_distributions=parameters,
scoring = 'r2',
n_jobs = -1,
cv = cv,
verbose=True, random_state=42
)
def hyperparameter_opt(data, target):
X, y = data, target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, shuffle=False)
estimator = xgb.XGBRegressor(random_state=42)
rand_search = RandomizedSearchCV(
estimator=estimator,
param_distributions=parameters,
scoring = 'r2',
n_jobs = -1,
cv = cv,
verbose=True, random_state=42)
rand_search_result = rand_search.fit(X_train, y_train)
print(rand_search_result.best_params_)
return rand_search_result
train_data.head()
target_r_params = hyperparameter_opt(train_data.drop(['Moons'], axis=1), train_targets.target_r)
target_g_params = hyperparameter_opt(train_data.drop(['Moons'], axis=1), train_targets.target_g)
target_b_params = hyperparameter_opt(train_data.drop(['Moons'], axis=1), train_targets.target_b)
###Output
Fitting 30 folds for each of 10 candidates, totalling 300 fits
###Markdown
4 - Modelling
###Code
def xg_boost_hackathon(data, target):
X, y = data, target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, shuffle=True)
model = xgb.XGBRegressor(objective='reg:squarederror', subsample = 0.5, n_estimators = 50,
min_child_weight = 3, max_depth = 3, learning_rate = 0.01, colsample_bytree = 0.7)
model.fit(X_train, y_train,
eval_set = [(X_train, y_train),
(X_test, y_test)],
eval_metric='rmse',
early_stopping_rounds=5)
pred = model.predict(X_test)
scorer(y_test, pred)
return model
def scorer(y_test, y_pred):
    score = stats.spearmanr(y_test, y_pred)[0] * 100  # take the correlation first, then scale to a percentage
print('Score as calculated for the leader board (っಠ‿ಠ)っ {}'.format(score))
###Output
_____no_output_____
###Markdown
Train your models on the targets

You can submit continuous targets if you want.
###Code
# Train and evaluate the model for target_r
model_target_r = xg_boost_hackathon(train_data.drop(['Moons', 'Moons_'], axis=1), train_targets.target_r)
# Train and evaluate the model for target_g
model_target_g = xg_boost_hackathon(train_data.drop(['Moons', 'Moons_'], axis=1), train_targets.target_g)
# Train and evaluate the model for target_b
model_target_b = xg_boost_hackathon(train_data.drop(['Moons', 'Moons_'], axis=1), train_targets.target_b)
results = model_target_b.evals_result()
epochs = len(results['validation_0']['rmse'])
x_axis = range(0, epochs)
fig, ax = plt.subplots()
ax.plot(x_axis, results['validation_0']['rmse'], label='Train')
ax.plot(x_axis, results['validation_1']['rmse'], label='Test')
ax.legend()
plt.ylabel('RMSE')
plt.title('XGBoost RMSE')
plt.show()
xgb.plot_importance(model_target_r, max_num_features=15)
###Output
_____no_output_____
###Markdown
5 - Make predictions on the 3 targets

When you feel like your model is accurate enough, it's time to predict the targets and submit your results. Repeat the operation on the three targets, concatenate the answers and submit.

**WARNING** 1/ Keep the row order identical.
**WARNING** 2/ Be sure that your columns are named target_r, target_g and target_b.
**WARNING** 3/ Your predictions need to be between 0 and 1.
**WARNING** 4/ Don't submit constant values.
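Once the `prediction` frame below is assembled, a quick sanity check against these warnings could look like this sketch:

```python
prediction = prediction.clip(lower=0, upper=1)  # warning 3: force values into [0, 1]
assert list(prediction.columns) == ['target_r', 'target_g', 'target_b']  # warning 2
assert prediction.nunique().min() > 1  # warning 4: no constant columns
```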
###Code
test_feat = extract_stat_features(test_data, 'Moons')
test_data = test_data.merge(test_feat, how = 'left', left_on=['Moons'], right_on=['Moons_'])
test_data.Moons.value_counts()
ll = train_data.columns.to_list()
test_data = test_data.reindex(test_data.columns.union(ll, sort=None), axis=1, fill_value=0)[ll]
prediction = pd.DataFrame()
prediction['target_r'] = model_target_r.predict(test_data.drop(['Moons', 'Moons_'], axis=1))
prediction['target_g'] = model_target_g.predict(test_data.drop(['Moons', 'Moons_'], axis=1))
prediction['target_b'] = model_target_b.predict(test_data.drop(['Moons', 'Moons_'], axis=1))
prediction.sample(3)
###Output
_____no_output_____
###Markdown
6 - Submit predictions

Paste your API key here. You received it by email upon subscription and can find it on your leaderboard.
###Code
API_KEY = "x54XvYd9cjk7joIwDbCk9egCGai4y51bXapviNdTYyPM0bIdY9Y4OjNpTdmf" # <- HERE
r = requests.post("https://tournament.datacrunch.com/api/submission",
files = {
"file": ("x", prediction.to_csv().encode('ascii'))
},
data = {
"apiKey": API_KEY
},
)
if r.status_code == 200:
print("Submission submitted :)")
elif r.status_code == 423:
print("ERR: Submissions are close")
print("You can only submit during rounds eg: Friday 7pm GMT+1 to Sunday midnight GMT+1.")
print("Or the server is currently crunching the submitted files, please wait some time before retrying.")
elif r.status_code == 422:
print("ERR: API Key is missing or empty")
print("Did you forget to fill the API_KEY variable?")
elif r.status_code == 404:
print("ERR: Unknown API Key")
print("You should check that the provided API key is valid and is the same as the one you've received by email.")
elif r.status_code == 400:
print("ERR: The file must not be empty")
print("You have send a empty file.")
elif r.status_code == 401:
print("ERR: Your email hasn't been verified")
print("Please verify your email or contact a cruncher.")
elif r.status_code == 429:
print("ERR: Too many submissions")
else:
print("ERR: Server returned: " + str(r.status_code))
print("Ouch! It seems that we were not expecting this kind of result from the server, if the probleme persist, contact a cruncher.")
###Output
_____no_output_____
###Markdown
Hyperopt tuning
###Code
from hyperopt import fmin, tpe, hp, STATUS_OK, Trials, space_eval
###Output
_____no_output_____
###Markdown
How to improve your prediction

- sklearn Docs: https://scikit-learn.org/stable/index.html
- Tuning the hyper-parameters of an estimator: https://scikit-learn.org/stable/modules/grid_search.html
- Cross Validation (in Python and R): https://www.analyticsvidhya.com/blog/2018/05/improve-model-performance-cross-validation-in-python-r/

Possible ways to improve your prediction:

1 - Feature extraction and feature engineering; the following methods are possible: principal component analysis (PCA), linear discriminant analysis (LDA), selecting best features (KBest), the t-SNE method for feature engineering, and feature interactions using PolynomialFeatures.

2 - Training multiple individual classifiers; these include: Keras neural networks, logistic regression, support vector machines, Gaussian naive Bayes, random forest classifier, extra trees classifier, gradient boosting classifier, AdaBoost classifier, bagging classifier, stochastic gradient descent, and K-nearest neighbors. Grid search and cross validation are used with some of the classifiers in order to fine-tune their hyperparameters. Pipelines are used for automating tasks when needed. A Keras neural network can be easily reconfigured using a different number of hidden layers and/or neurons per layer, along with different training algorithms.

3 - Aggregating individual classifiers using ensembling by soft voting, blending and stacking; possibilities include: blending with logistic regression, blending with linear regression, blending with extremely randomized trees, blending with a Keras neural network classifier, stacking with a TensorFlow DNN classifier, stacking with extremely randomized trees, stacking with a Keras neural network classifier with merged branches, and simple averaging of classifiers using different weights.

A sketch of one of these techniques, blending with linear regression, is shown below.
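For instance, blending with linear regression could look like the following minimal sketch (assumptions: the `train_data`/`train_targets` frames defined above, illustrative base models, and a simple holdout split):

```python
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import ExtraTreesRegressor

X = train_data.drop(['Moons', 'Moons_'], axis=1)
X_tr, X_hold, y_tr, y_hold = train_test_split(
    X, train_targets.target_r, test_size=0.2, shuffle=False)

# Fit each base model on the training part, then predict on the holdout part
base_models = [
    xgb.XGBRegressor(objective='reg:squarederror', n_estimators=100),
    ExtraTreesRegressor(n_estimators=100, random_state=42),
]
holdout_preds = []
for m in base_models:
    m.fit(X_tr, y_tr)
    holdout_preds.append(m.predict(X_hold))

# The blender learns how to weight the base models' holdout predictions
blender = LinearRegression().fit(np.column_stack(holdout_preds), y_hold)
```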
###Code
# Download the prediction file
from google.colab import files
with open("prediction.csv", "wb") as f:
    f.write(prediction.to_csv().encode('ascii'))
files.download('prediction.csv')
from google.colab import files
import pickle
# Export model as pickle
pickle.dump(model_target_r, open("model_target_r.model", "wb"))
files.download('model_target_r.model')
!pip freeze | grep "sklearn"
###Output
_____no_output_____ |
lessons/02_Simulated_Scan_Strategies/simscan_satellite_mpi.ipynb | ###Markdown
Simulated Satellite Scan Strategies - MPI Example
###Code
# Are you using a special reservation for a workshop?
# If so, set it here:
nersc_reservation = "toast2"
# Load common tools for all lessons
import sys
sys.path.insert(0, "..")
from lesson_tools import (
check_nersc,
)
nersc_host, nersc_repo, nersc_resv = check_nersc(reservation=nersc_reservation)
# Capture C++ output in the jupyter cells
%reload_ext wurlitzer
%%writefile simscan_satellite_mpi.py
import toast
from toast.mpi import MPI
# Load common tools for all lessons
import sys
sys.path.insert(0, "..")
from lesson_tools import (
fake_focalplane
)
import numpy as np
import healpy as hp
import matplotlib.pyplot as plt
from toast.todmap import (
slew_precession_axis,
TODSatellite,
get_submaps_nested,
OpPointingHpix,
OpAccumDiag
)
from toast.map import (
DistPixels
)
env = toast.Environment.get()
# We have many small observations, so we should use a small
# group size. Here we choose a group size of one process.
comm = toast.Comm(world=MPI.COMM_WORLD, groupsize=1)
if comm.world_rank == 0:
print(env)
# Create our fake focalplane
fp = fake_focalplane()
detnames = list(sorted(fp.keys()))
detquat = {x: fp[x]["quat"] for x in detnames}
# Scan parameters
alpha = 50.0 # precession opening angle, degrees
beta = 45.0 # spin opening angle, degrees
p_alpha = 25.0 # precession period, minutes
p_beta = 1.25 # spin period, minutes
samplerate = 8.9 # sample rate, Hz
hwprpm = 5.0 # HWP rotation in RPM
nside = 32 # Healpix NSIDE
# We will use one observation per day, with no gaps in between, and
# run for one year.
obs_samples = int(24 * 3600.0 * samplerate) - 1
nobs = 366
# Slew the precession axis so that it completes one circle
deg_per_day = 360.0 / nobs
# Create distributed data
data = toast.Data(comm)
# Append observations
for ob in range(nobs):
# Am I in the group that has this observation?
if (ob % comm.ngroups) != comm.group:
# nope...
continue
obsname = "{:03d}".format(ob)
obsfirst = ob * (obs_samples + 1)
obsstart = 24 * 3600.0
tod = TODSatellite(
comm.comm_group,
detquat,
obs_samples,
firstsamp=obsfirst,
firsttime=obsstart,
rate=samplerate,
spinperiod=p_beta,
spinangle=beta,
precperiod=p_alpha,
precangle=alpha,
coord="E",
hwprpm=hwprpm
)
qprec = np.empty(4 * tod.local_samples[1], dtype=np.float64).reshape((-1, 4))
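    # One quaternion (4 floats) per local sample: the precession axis
    # orientation, filled in by slew_precession_axis below.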
slew_precession_axis(
qprec,
firstsamp=obsfirst,
samplerate=samplerate,
degday=deg_per_day,
)
tod.set_prec_axis(qprec=qprec)
obs = dict()
obs["tod"] = tod
data.obs.append(obs)
# Make a simple pointing matrix
pointing = OpPointingHpix(nside=nside, nest=True, mode="IQU")
pointing.exec(data)
# Compute the locally hit pixels
localpix, localsm, subnpix = get_submaps_nested(data, nside)
# Construct a distributed map to store the hit map
npix = 12 * nside**2
hits = DistPixels(
comm=data.comm.comm_world,
size=npix,
nnz=1,
dtype=np.int64,
submap=subnpix,
local=localsm,
)
hits.data.fill(0)
# Accumulate the hit map locally
build_hits = OpAccumDiag(hits=hits)
build_hits.exec(data)
# Reduce the map across processes
hits.allreduce()
# Write out the map
hitsfile = "simscan_satellite_hits_mpi.fits"
hits.write_healpix_fits(hitsfile)
# Plot the map.
if comm.world_rank == 0:
hitdata = hp.read_map(hitsfile, nest=True)
hp.mollview(hitdata, xsize=800, nest=True, cmap="cool", min=0)
plt.savefig("{}.png".format(hitsfile))
plt.close()
import subprocess as sp
command = "python simscan_satellite_mpi.py"
runstr = None
if nersc_host is not None:
runstr = "export OMP_NUM_THREADS=4; srun -N 2 -C haswell -n 32 -c 4 --cpu_bind=cores -t 00:05:00"
if nersc_resv is not None:
runstr = "{} --reservation {}".format(runstr, nersc_resv)
else:
# Just use mpirun
runstr = "mpirun -np 4"
runcom = "{} {}".format(runstr, command)
print(runcom, flush=True)
sp.check_call(runcom, stderr=sp.STDOUT, shell=True)
###Output
_____no_output_____
###Markdown
Simulated Satellite Scan Strategies - MPI Example
###Code
# Are you using a special reservation for a workshop?
# If so, set it here:
nersc_reservation = "toast2"
# Load common tools for all lessons
import sys
sys.path.insert(0, "..")
from lesson_tools import (
check_nersc,
)
nersc_host, nersc_repo, nersc_resv = check_nersc(reservation=nersc_reservation)
# Capture C++ output in the jupyter cells
%reload_ext wurlitzer
%%writefile simscan_satellite_mpi.py
import toast
from toast.mpi import MPI
# Load common tools for all lessons
import sys
sys.path.insert(0, "..")
from lesson_tools import (
fake_focalplane
)
import numpy as np
import healpy as hp
import matplotlib.pyplot as plt
from toast.todmap import (
slew_precession_axis,
TODSatellite,
OpPointingHpix,
OpAccumDiag
)
from toast.map import (
DistPixels
)
env = toast.Environment.get()
# We have many small observations, so we should use a small
# group size. Here we choose a group size of one process.
comm = toast.Comm(world=MPI.COMM_WORLD, groupsize=1)
if comm.world_rank == 0:
print(env)
# Create our fake focalplane
fp = fake_focalplane()
detnames = list(sorted(fp.keys()))
detquat = {x: fp[x]["quat"] for x in detnames}
# Scan parameters
alpha = 50.0 # precession opening angle, degrees
beta = 45.0 # spin opening angle, degrees
p_alpha = 25.0 # precession period, minutes
p_beta = 1.25 # spin period, minutes
samplerate = 8.9 # sample rate, Hz
hwprpm = 5.0 # HWP rotation in RPM
nside = 32 # Healpix NSIDE
# We will use one observation per day, with no gaps in between, and
# run for one year.
obs_samples = int(24 * 3600.0 * samplerate) - 1
nobs = 366
# Slew the precession axis so that it completes one circle
deg_per_day = 360.0 / nobs
# Create distributed data
data = toast.Data(comm)
# Append observations
for ob in range(nobs):
# Am I in the group that has this observation?
if (ob % comm.ngroups) != comm.group:
# nope...
continue
obsname = "{:03d}".format(ob)
obsfirst = ob * (obs_samples + 1)
obsstart = 24 * 3600.0
tod = TODSatellite(
comm.comm_group,
detquat,
obs_samples,
firstsamp=obsfirst,
firsttime=obsstart,
rate=samplerate,
spinperiod=p_beta,
spinangle=beta,
precperiod=p_alpha,
precangle=alpha,
coord="E",
hwprpm=hwprpm
)
qprec = np.empty(4 * tod.local_samples[1], dtype=np.float64).reshape((-1, 4))
slew_precession_axis(
qprec,
firstsamp=obsfirst,
samplerate=samplerate,
degday=deg_per_day,
)
tod.set_prec_axis(qprec=qprec)
obs = dict()
obs["tod"] = tod
data.obs.append(obs)
# Make a simple pointing matrix
pointing = OpPointingHpix(nside=nside, nest=True, mode="IQU")
pointing.exec(data)
# Construct a distributed map to store the hit map
npix = 12 * nside**2
hits = DistPixels(
data,
nnz=1,
dtype=np.int64,
)
hits.data.fill(0)
# Accumulate the hit map locally
build_hits = OpAccumDiag(hits=hits)
build_hits.exec(data)
# Reduce the map across processes
hits.allreduce()
# Write out the map
hitsfile = "simscan_satellite_hits_mpi.fits"
hits.write_healpix_fits(hitsfile)
# Plot the map.
if comm.world_rank == 0:
hitdata = hp.read_map(hitsfile, nest=True)
hp.mollview(hitdata, xsize=800, nest=True, cmap="cool", min=0)
plt.savefig("{}.png".format(hitsfile))
plt.close()
import subprocess as sp
command = "python simscan_satellite_mpi.py"
runstr = None
if nersc_host is not None:
runstr = "export OMP_NUM_THREADS=4; srun -N 2 -C haswell -n 32 -c 4 --cpu_bind=cores -t 00:05:00"
if nersc_resv is not None:
runstr = "{} --reservation {}".format(runstr, nersc_resv)
else:
# Just use mpirun
runstr = "mpirun -np 4"
runcom = "{} {}".format(runstr, command)
print(runcom, flush=True)
sp.check_call(runcom, stderr=sp.STDOUT, shell=True)
###Output
_____no_output_____
###Markdown
Simulated Satellite Scan Strategies - MPI Example
###Code
# Are you using a special reservation for a workshop?
# If so, set it here:
nersc_reservation = None #"toast2"
# Load common tools for all lessons
import sys
sys.path.insert(0, "..")
from lesson_tools import (
check_nersc,
)
nersc_host, nersc_repo, nersc_resv = check_nersc(reservation=nersc_reservation)
# Capture C++ output in the jupyter cells
%reload_ext wurlitzer
%%writefile simscan_satellite_mpi.py
import toast
from toast.mpi import MPI
# Load common tools for all lessons
import sys
sys.path.insert(0, "..")
from lesson_tools import (
fake_focalplane
)
import numpy as np
import healpy as hp
import matplotlib.pyplot as plt
from toast.todmap import (
slew_precession_axis,
TODSatellite,
OpPointingHpix,
OpAccumDiag
)
from toast.map import (
DistPixels
)
env = toast.Environment.get()
# We have many small observations, so we should use a small
# group size. Here we choose a group size of one process.
comm = toast.Comm(world=MPI.COMM_WORLD, groupsize=1)
if comm.world_rank == 0:
print(env)
# Create our fake focalplane
fp = fake_focalplane()
detnames = list(sorted(fp.keys()))
detquat = {x: fp[x]["quat"] for x in detnames}
# Scan parameters
alpha = 50.0 # precession opening angle, degrees
beta = 45.0 # spin opening angle, degrees
p_alpha = 25.0 # precession period, minutes
p_beta = 1.25 # spin period, minutes
samplerate = 8.9 # sample rate, Hz
hwprpm = 5.0 # HWP rotation in RPM
nside = 32 # Healpix NSIDE
# We will use one observation per day, with no gaps in between, and
# run for one year.
obs_samples = int(24 * 3600.0 * samplerate) - 1
# NOTE:
# Change this to 366 if running at NERSC (where we can use more nodes
# to get enough RAM).
#nobs = 366
nobs = 10
# Slew the precession axis so that it completes one circle
deg_per_day = 360.0 / nobs
# Create distributed data
data = toast.Data(comm)
# Append observations
for ob in range(nobs):
# Am I in the group that has this observation?
if (ob % comm.ngroups) != comm.group:
# nope...
continue
obsname = "{:03d}".format(ob)
obsfirst = ob * (obs_samples + 1)
    obsstart = ob * 24 * 3600.0  # one observation per day, no gaps, so advance the start time each day
tod = TODSatellite(
comm.comm_group,
detquat,
obs_samples,
firstsamp=obsfirst,
firsttime=obsstart,
rate=samplerate,
spinperiod=p_beta,
spinangle=beta,
precperiod=p_alpha,
precangle=alpha,
coord="E",
hwprpm=hwprpm
)
qprec = np.empty(4 * tod.local_samples[1], dtype=np.float64).reshape((-1, 4))
slew_precession_axis(
qprec,
firstsamp=obsfirst,
samplerate=samplerate,
degday=deg_per_day,
)
tod.set_prec_axis(qprec=qprec)
obs = dict()
obs["tod"] = tod
data.obs.append(obs)
# Make a simple pointing matrix
pointing = OpPointingHpix(nside=nside, nest=True, mode="IQU")
pointing.exec(data)
# Construct a distributed map to store the hit map
npix = 12 * nside**2
hits = DistPixels(
data,
nnz=1,
dtype=np.int64,
)
hits.data.fill(0)
# Accumulate the hit map locally
build_hits = OpAccumDiag(hits=hits)
build_hits.exec(data)
# Reduce the map across processes (a No-op in this case)
hits.allreduce()
# Write out the map
hitsfile = "simscan_satellite_hits_mpi.fits"
hits.write_healpix_fits(hitsfile)
# Plot the map.
if comm.world_rank == 0:
hitdata = hp.read_map(hitsfile, nest=True)
hp.mollview(hitdata, xsize=800, nest=True, cmap="cool", min=0)
plt.savefig("{}.png".format(hitsfile))
plt.close()
import subprocess as sp
command = "python simscan_satellite_mpi.py"
runstr = None
if nersc_host is not None:
runstr = "export OMP_NUM_THREADS=4; srun -N 2 -C haswell -n 32 -c 4 --cpu_bind=cores -t 00:05:00"
if nersc_resv is not None:
runstr = "{} --reservation {}".format(runstr, nersc_resv)
else:
# Just use mpirun
runstr = "mpirun -np 4"
runcom = "{} {}".format(runstr, command)
print(runcom, flush=True)
sp.check_call(runcom, stderr=sp.STDOUT, shell=True)
###Output
_____no_output_____ |
2_Improving Deep Neural Networks Hyperparameter tuning Regularization and Optimization/week5/Regularization/Regularization.ipynb | ###Markdown
Regularization

Welcome to the second assignment of this week. Deep learning models have so much flexibility and capacity that **overfitting can be a serious problem** if the training dataset is not big enough: the model does well on the training set, but the learned network **doesn't generalize to new examples** that it has never seen!

**You will learn to:** use regularization in your deep learning models.

Let's first import the packages you are going to use.
###Code
# import packages
import numpy as np
import matplotlib.pyplot as plt
from reg_utils import sigmoid, relu, plot_decision_boundary, initialize_parameters, load_2D_dataset, predict_dec
from reg_utils import compute_cost, predict, forward_propagation, backward_propagation, update_parameters
import sklearn
import sklearn.datasets
import scipy.io
from testCases import *
%matplotlib inline
plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
###Output
/home/jovyan/work/week5/Regularization/reg_utils.py:85: SyntaxWarning: assertion is always true, perhaps remove parentheses?
assert(parameters['W' + str(l)].shape == layer_dims[l], layer_dims[l-1])
/home/jovyan/work/week5/Regularization/reg_utils.py:86: SyntaxWarning: assertion is always true, perhaps remove parentheses?
assert(parameters['W' + str(l)].shape == layer_dims[l], 1)
###Markdown
**Problem Statement**: You have just been hired as an AI expert by the French Football Corporation. They would like you to recommend positions where France's goalkeeper should kick the ball so that the French team's players can then hit it with their head.

**Figure 1**: **Football field** — the goalkeeper kicks the ball in the air, and the players of each team fight to hit it with their head.

They give you the following 2D dataset from France's past 10 games.
###Code
train_X, train_Y, test_X, test_Y = load_2D_dataset()
###Output
_____no_output_____
###Markdown
Each dot corresponds to a position on the football field where a football player has hit the ball with his/her head after the French goalkeeper has shot the ball from the left side of the field.
- If the dot is blue, it means the French player managed to hit the ball with his/her head.
- If the dot is red, it means the other team's player hit the ball with their head.

**Your goal**: Use a deep learning model to find the positions on the field where the goalkeeper should kick the ball.

**Analysis of the dataset**: This dataset is a little noisy, but it looks like a diagonal line separating the upper-left half (blue) from the lower-right half (red) would work well. You will first try a non-regularized model. Then you'll learn how to regularize it and decide which model you will choose to solve the French Football Corporation's problem.

1 - Non-regularized model

You will use the following neural network (already implemented for you below). This model can be used:
- in *regularization mode* -- by setting the `lambd` input to a non-zero value. We use "`lambd`" instead of "`lambda`" because "`lambda`" is a reserved keyword in Python.
- in *dropout mode* -- by setting `keep_prob` to a value less than one.

You will first try the model without any regularization. Then, you will implement:
- *L2 regularization* -- functions: "`compute_cost_with_regularization()`" and "`backward_propagation_with_regularization()`"
- *Dropout* -- functions: "`forward_propagation_with_dropout()`" and "`backward_propagation_with_dropout()`"

In each part, you will run this model with the correct inputs so that it calls the functions you've implemented. Take a look at the code below to familiarize yourself with the model.
###Code
def model(X, Y, learning_rate = 0.3, num_iterations = 30000, print_cost = True, lambd = 0, keep_prob = 1):
"""
Implements a three-layer neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID.
Arguments:
X -- input data, of shape (input size, number of examples)
Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (output size, number of examples)
learning_rate -- learning rate of the optimization
num_iterations -- number of iterations of the optimization loop
print_cost -- If True, print the cost every 10000 iterations
lambd -- regularization hyperparameter, scalar
keep_prob - probability of keeping a neuron active during drop-out, scalar.
Returns:
parameters -- parameters learned by the model. They can then be used to predict.
"""
grads = {}
costs = [] # to keep track of the cost
m = X.shape[1] # number of examples
layers_dims = [X.shape[0], 20, 3, 1]
# Initialize parameters dictionary.
parameters = initialize_parameters(layers_dims)
# Loop (gradient descent)
for i in range(0, num_iterations):
# Forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID.
if keep_prob == 1:
a3, cache = forward_propagation(X, parameters)
elif keep_prob < 1:
a3, cache = forward_propagation_with_dropout(X, parameters, keep_prob)
# Cost function
if lambd == 0:
cost = compute_cost(a3, Y)
else:
cost = compute_cost_with_regularization(a3, Y, parameters, lambd)
# Backward propagation.
assert(lambd==0 or keep_prob==1) # it is possible to use both L2 regularization and dropout,
# but this assignment will only explore one at a time
if lambd == 0 and keep_prob == 1:
grads = backward_propagation(X, Y, cache)
elif lambd != 0:
grads = backward_propagation_with_regularization(X, Y, cache, lambd)
elif keep_prob < 1:
grads = backward_propagation_with_dropout(X, Y, cache, keep_prob)
# Update parameters.
parameters = update_parameters(parameters, grads, learning_rate)
# Print the loss every 10000 iterations
if print_cost and i % 10000 == 0:
print("Cost after iteration {}: {}".format(i, cost))
if print_cost and i % 1000 == 0:
costs.append(cost)
# plot the cost
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (x1,000)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
return parameters
###Output
_____no_output_____
###Markdown
Let's train the model without any regularization, and observe the accuracy on the train/test sets.
###Code
parameters = model(train_X, train_Y)
print ("On the training set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
###Output
Cost after iteration 0: 0.6557412523481002
Cost after iteration 10000: 0.16329987525724216
Cost after iteration 20000: 0.13851642423255986
###Markdown
The train accuracy is 94.8% while the test accuracy is 91.5%. This is the **baseline model** (you will observe the impact of regularization on this model). Run the following code to plot the decision boundary of your model.
###Code
plt.title("Model without regularization")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____
###Markdown
The non-regularized model is obviously overfitting the training set: it is fitting the noisy points! Let's now look at two techniques to reduce overfitting.

2 - L2 Regularization

The standard way to avoid overfitting is called **L2 regularization**. It consists of appropriately modifying your cost function, from:

$$J = -\frac{1}{m} \sum\limits_{i = 1}^{m} \large{(}\small  y^{(i)}\log\left(a^{[L](i)}\right) + (1-y^{(i)})\log\left(1- a^{[L](i)}\right) \large{)} \tag{1}$$

to:

$$J_{regularized} = \small \underbrace{-\frac{1}{m} \sum\limits_{i = 1}^{m} \large{(}\small y^{(i)}\log\left(a^{[L](i)}\right) + (1-y^{(i)})\log\left(1- a^{[L](i)}\right) \large{)} }_\text{cross-entropy cost} + \underbrace{\frac{1}{m} \frac{\lambda}{2} \sum\limits_l\sum\limits_k\sum\limits_j W_{k,j}^{[l]2} }_\text{L2 regularization cost} \tag{2}$$

Let's modify your cost and observe the consequences.

**Exercise**: Implement `compute_cost_with_regularization()` which computes the cost given by formula (2). To calculate $\sum\limits_k\sum\limits_j W_{k,j}^{[l]2}$, use:
```python
np.sum(np.square(Wl))
```
Note that you have to do this for $W^{[1]}$, $W^{[2]}$ and $W^{[3]}$, then sum the three terms and multiply by $\frac{1}{m} \frac{\lambda}{2}$.
###Code
# GRADED FUNCTION: compute_cost_with_regularization
def compute_cost_with_regularization(A3, Y, parameters, lambd):
"""
Implement the cost function with L2 regularization. See formula (2) above.
Arguments:
A3 -- post-activation, output of forward propagation, of shape (output size, number of examples)
Y -- "true" labels vector, of shape (output size, number of examples)
parameters -- python dictionary containing parameters of the model
Returns:
cost - value of the regularized loss function (formula (2))
"""
m = Y.shape[1]
W1 = parameters["W1"]
W2 = parameters["W2"]
W3 = parameters["W3"]
cross_entropy_cost = compute_cost(A3, Y) # This gives you the cross-entropy part of the cost
### START CODE HERE ### (approx. 1 line)
L2_regularization_cost = (np.sum(np.square(W1)) + np.sum(np.square(W2)) + np.sum(np.square(W3))) * lambd / 2 / m
    ### END CODE HERE ###
cost = cross_entropy_cost + L2_regularization_cost
return cost
A3, Y_assess, parameters = compute_cost_with_regularization_test_case()
print("cost = " + str(compute_cost_with_regularization(A3, Y_assess, parameters, lambd = 0.1)))
###Output
cost = 1.78648594516
###Markdown
**Expected Output**:

cost = 1.78648594516

Of course, because you changed the cost, you have to change backward propagation as well! All the gradients have to be computed with respect to this new cost.

**Exercise**: Implement the changes needed in backward propagation to take into account regularization. The changes only concern dW1, dW2 and dW3. For each, you have to add the regularization term's gradient ($\frac{d}{dW} ( \frac{1}{2}\frac{\lambda}{m}  W^2) = \frac{\lambda}{m} W$).
###Code
# GRADED FUNCTION: backward_propagation_with_regularization
def backward_propagation_with_regularization(X, Y, cache, lambd):
"""
Implements the backward propagation of our baseline model to which we added an L2 regularization.
Arguments:
X -- input dataset, of shape (input size, number of examples)
Y -- "true" labels vector, of shape (output size, number of examples)
cache -- cache output from forward_propagation()
lambd -- regularization hyperparameter, scalar
Returns:
gradients -- A dictionary with the gradients with respect to each parameter, activation and pre-activation variables
"""
m = X.shape[1]
(Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache
dZ3 = A3 - Y
### START CODE HERE ### (approx. 1 line)
dW3 = 1./m * np.dot(dZ3, A2.T) + W3 * lambd / m
### END CODE HERE ###
db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)
dA2 = np.dot(W3.T, dZ3)
dZ2 = np.multiply(dA2, np.int64(A2 > 0))
### START CODE HERE ### (approx. 1 line)
dW2 = 1./m * np.dot(dZ2, A1.T) + W2 * lambd / m
### END CODE HERE ###
db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)
dA1 = np.dot(W2.T, dZ2)
dZ1 = np.multiply(dA1, np.int64(A1 > 0))
### START CODE HERE ### (approx. 1 line)
dW1 = 1./m * np.dot(dZ1, X.T) + W1 * lambd / m
### END CODE HERE ###
db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True)
gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,"dA2": dA2,
"dZ2": dZ2, "dW2": dW2, "db2": db2, "dA1": dA1,
"dZ1": dZ1, "dW1": dW1, "db1": db1}
return gradients
X_assess, Y_assess, cache = backward_propagation_with_regularization_test_case()
grads = backward_propagation_with_regularization(X_assess, Y_assess, cache, lambd = 0.7)
print ("dW1 = "+ str(grads["dW1"]))
print ("dW2 = "+ str(grads["dW2"]))
print ("dW3 = "+ str(grads["dW3"]))
###Output
dW1 = [[-0.25604646 0.12298827 -0.28297129]
[-0.17706303 0.34536094 -0.4410571 ]]
dW2 = [[ 0.79276486 0.85133918]
[-0.0957219 -0.01720463]
[-0.13100772 -0.03750433]]
dW3 = [[-1.77691347 -0.11832879 -0.09397446]]
###Markdown
**Expected Output**:

dW1 = [[-0.25604646  0.12298827 -0.28297129]
 [-0.17706303  0.34536094 -0.4410571 ]]
dW2 = [[ 0.79276486  0.85133918]
 [-0.0957219  -0.01720463]
 [-0.13100772 -0.03750433]]
dW3 = [[-1.77691347 -0.11832879 -0.09397446]]

Let's now run the model with L2 regularization $(\lambda = 0.7)$. The `model()` function will call:
- `compute_cost_with_regularization` instead of `compute_cost`
- `backward_propagation_with_regularization` instead of `backward_propagation`
###Code
parameters = model(train_X, train_Y, lambd = 0.7)
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
###Output
Cost after iteration 0: 0.6974484493131264
Cost after iteration 10000: 0.2684918873282239
Cost after iteration 20000: 0.2680916337127301
###Markdown
Congrats, the test set accuracy increased to 93%. You have saved the French football team!

You are not overfitting the training data anymore. Let's plot the decision boundary.
###Code
plt.title("Model with L2-regularization")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____
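###Markdown
A minimal numeric sketch (toy values, not the assignment's network) of the "weight decay" effect summarized in the observations below: the L2 gradient term pulls each weight toward zero on every update.
###Code
import numpy as np

# Toy illustration (hypothetical values): one gradient-descent step on W,
# with and without the L2 term (lambd / m) * W added to the gradient.
W = np.array([2.0, -3.0])
dW_data = np.array([0.1, -0.2])   # pretend gradient from the data term alone
lambd, m, lr = 0.7, 50, 0.3

W_plain = W - lr * dW_data
W_l2 = W - lr * (dW_data + (lambd / m) * W)   # shrinks W by a factor (1 - lr*lambd/m) first
print(W_plain)
print(W_l2)   # slightly closer to zero than the unregularized update
###Output
_____no_output_____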
###Markdown
**Observations**:
- The value of $\lambda$ is a hyperparameter that you can tune using a dev set.
- L2 regularization makes your decision boundary smoother. If $\lambda$ is too large, it is also possible to "oversmooth", resulting in a model with high bias.

**What is L2-regularization actually doing?**

L2-regularization relies on the assumption that a model with small weights is simpler than a model with large weights. Thus, by penalizing the square values of the weights in the cost function you drive all the weights to smaller values. It becomes too costly for the cost to have large weights! This leads to a smoother model in which the output changes more slowly as the input changes.

**What you should remember** -- the implications of L2-regularization on:
- The cost computation:
    - A regularization term is added to the cost.
- The backpropagation function:
    - There are extra terms in the gradients with respect to the weight matrices.
- Weights end up smaller ("weight decay"):
    - Weights are pushed to smaller values.

3 - Dropout

Finally, **dropout** is a widely used regularization technique that is specific to deep learning. **It randomly shuts down some neurons in each iteration.** Watch these two videos to see what this means!

<!--
To understand drop-out, consider this conversation with a friend:
- Friend: "Why do you need all these neurons to train your network and classify images?"
- You: "Because each neuron contains a weight and can learn specific features/details/shape of an image. The more neurons I have, the more features my model learns!"
- Friend: "I see, but are you sure that your neurons are learning different features and not all the same features?"
- You: "Good point... Neurons in the same layer actually don't talk to each other. It should definitely be possible that they learn the same image features/shapes/forms/details... which would be redundant. There should be a solution."
!-->

Figure 2: Drop-out on the second hidden layer. At each iteration, you shut down (= set to zero) each neuron of a layer with probability $1 - keep\_prob$ or keep it with probability $keep\_prob$ (50% here). The dropped neurons don't contribute to the training in both the forward and backward propagations of the iteration.

Figure 3: Drop-out on the first and third hidden layers. $1^{st}$ layer: we shut down on average 40% of the neurons. $3^{rd}$ layer: we shut down on average 20% of the neurons.

When you shut some neurons down, you actually modify your model. The idea behind drop-out is that at each iteration, you train a different model that uses only a subset of your neurons. With dropout, your neurons thus become less sensitive to the activation of one other specific neuron, because that other neuron might be shut down at any time.

3.1 - Forward propagation with dropout

**Exercise**: Implement the forward propagation with dropout. You are using a 3-layer neural network, and will add dropout to the first and second hidden layers. We will not apply dropout to the input layer or output layer.

**Instructions**: You would like to shut down some neurons in the first and second layers. To do that, you are going to carry out 4 steps:
1. In lecture, we discussed creating a variable $d^{[1]}$ with the same shape as $a^{[1]}$ using `np.random.rand()` to randomly get numbers between 0 and 1. Here, you will use a vectorized implementation, so create a random matrix $D^{[1]} = [d^{[1](1)} d^{[1](2)} ... d^{[1](m)}] $ of the same dimension as $A^{[1]}$.
2. Set each entry of $D^{[1]}$ to be 0 with probability (`1-keep_prob`) or 1 with probability (`keep_prob`), by thresholding values in $D^{[1]}$ appropriately. Hint: to set all the entries of a matrix X to 1 (if the entry is less than 0.5) or 0 (otherwise), you would do: `X = (X < 0.5)`. Note that False and True are respectively equivalent to 0 and 1.
3. Set $A^{[1]}$ to $A^{[1]} * D^{[1]}$. (You are shutting down some neurons.) You can think of $D^{[1]}$ as a mask, so that when it is multiplied with another matrix, it shuts down some of the values.
4. Divide $A^{[1]}$ by `keep_prob`. By doing this you are assuring that the result of the cost will still have the same expected value as without drop-out. (This technique is also called inverted dropout.)
###Code
# GRADED FUNCTION: forward_propagation_with_dropout
def forward_propagation_with_dropout(X, parameters, keep_prob = 0.5):
"""
Implements the forward propagation: LINEAR -> RELU + DROPOUT -> LINEAR -> RELU + DROPOUT -> LINEAR -> SIGMOID.
Arguments:
X -- input dataset, of shape (2, number of examples)
parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3":
W1 -- weight matrix of shape (20, 2)
b1 -- bias vector of shape (20, 1)
W2 -- weight matrix of shape (3, 20)
b2 -- bias vector of shape (3, 1)
W3 -- weight matrix of shape (1, 3)
b3 -- bias vector of shape (1, 1)
keep_prob - probability of keeping a neuron active during drop-out, scalar
Returns:
A3 -- last activation value, output of the forward propagation, of shape (1,1)
cache -- tuple, information stored for computing the backward propagation
"""
np.random.seed(1)
# retrieve parameters
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
W3 = parameters["W3"]
b3 = parameters["b3"]
# LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID
Z1 = np.dot(W1, X) + b1
A1 = relu(Z1)
### START CODE HERE ### (approx. 4 lines) # Steps 1-4 below correspond to the Steps 1-4 described above.
D1 = np.random.rand(A1.shape[0], A1.shape[1]) # Step 1: initialize matrix D1 = np.random.rand(..., ...)
D1 = np.where(D1 <= keep_prob, 1 , 0 ) # Step 2: convert entries of D1 to 0 or 1 (using keep_prob as the threshold)
A1 = A1 * D1 # Step 3: shut down some neurons of A1
A1 = A1/keep_prob # Step 4: scale the value of neurons that haven't been shut down
### END CODE HERE ###
Z2 = np.dot(W2, A1) + b2
A2 = relu(Z2)
### START CODE HERE ### (approx. 4 lines)
D2 = np.random.rand(A2.shape[0], A2.shape[1]) # Step 1: initialize matrix D2 = np.random.rand(..., ...)
D2 = np.where(D2 <= keep_prob, 1 , 0 ) # Step 2: convert entries of D2 to 0 or 1 (using keep_prob as the threshold)
A2 = A2 * D2 # Step 3: shut down some neurons of A2
A2 = A2/keep_prob # Step 4: scale the value of neurons that haven't been shut down
### END CODE HERE ###
Z3 = np.dot(W3, A2) + b3
A3 = sigmoid(Z3)
cache = (Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3)
return A3, cache
X_assess, parameters = forward_propagation_with_dropout_test_case()
A3, cache = forward_propagation_with_dropout(X_assess, parameters, keep_prob = 0.7)
print ("A3 = " + str(A3))
###Output
A3 = [[ 0.36974721 0.00305176 0.04565099 0.49683389 0.36974721]]
###Markdown
**Expected Output**:

A3 = [[ 0.36974721  0.00305176  0.04565099  0.49683389  0.36974721]]

3.2 - Backward propagation with dropout

**Exercise**: Implement the backward propagation with dropout. As before, you are training a 3-layer network. Add dropout to the first and second hidden layers, using the masks $D^{[1]}$ and $D^{[2]}$ stored in the cache.

**Instructions**: Backpropagation with dropout is actually quite easy. You will have to carry out 2 steps:
1. You had previously shut down some neurons during forward propagation, by applying a mask $D^{[1]}$ to `A1`. In backpropagation, you will have to shut down the same neurons, by reapplying the same mask $D^{[1]}$ to `dA1`.
2. During forward propagation, you had divided `A1` by `keep_prob`. In backpropagation, you'll therefore have to divide `dA1` by `keep_prob` again (the calculus interpretation is that if $A^{[1]}$ is scaled by `keep_prob`, then its derivative $dA^{[1]}$ is also scaled by the same `keep_prob`).
###Code
# GRADED FUNCTION: backward_propagation_with_dropout
def backward_propagation_with_dropout(X, Y, cache, keep_prob):
"""
Implements the backward propagation of our baseline model to which we added dropout.
Arguments:
X -- input dataset, of shape (2, number of examples)
Y -- "true" labels vector, of shape (output size, number of examples)
cache -- cache output from forward_propagation_with_dropout()
keep_prob - probability of keeping a neuron active during drop-out, scalar
Returns:
gradients -- A dictionary with the gradients with respect to each parameter, activation and pre-activation variables
"""
m = X.shape[1]
(Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3) = cache
dZ3 = A3 - Y
dW3 = 1./m * np.dot(dZ3, A2.T)
db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)
dA2 = np.dot(W3.T, dZ3)
### START CODE HERE ### (≈ 2 lines of code)
dA2 = dA2 * D2 # Step 1: Apply mask D2 to shut down the same neurons as during the forward propagation
dA2 = dA2/ keep_prob # Step 2: Scale the value of neurons that haven't been shut down
### END CODE HERE ###
dZ2 = np.multiply(dA2, np.int64(A2 > 0))
dW2 = 1./m * np.dot(dZ2, A1.T)
db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)
dA1 = np.dot(W2.T, dZ2)
### START CODE HERE ### (≈ 2 lines of code)
dA1 = dA1 * D1 # Step 1: Apply mask D1 to shut down the same neurons as during the forward propagation
dA1 = dA1/ keep_prob # Step 2: Scale the value of neurons that haven't been shut down
### END CODE HERE ###
dZ1 = np.multiply(dA1, np.int64(A1 > 0))
dW1 = 1./m * np.dot(dZ1, X.T)
db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True)
gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,"dA2": dA2,
"dZ2": dZ2, "dW2": dW2, "db2": db2, "dA1": dA1,
"dZ1": dZ1, "dW1": dW1, "db1": db1}
return gradients
X_assess, Y_assess, cache = backward_propagation_with_dropout_test_case()
gradients = backward_propagation_with_dropout(X_assess, Y_assess, cache, keep_prob = 0.8)
print ("dA1 = " + str(gradients["dA1"]))
print ("dA2 = " + str(gradients["dA2"]))
###Output
dA1 = [[ 0.36544439 0. -0.00188233 0. -0.17408748]
[ 0.65515713 0. -0.00337459 0. -0. ]]
dA2 = [[ 0.58180856 0. -0.00299679 0. -0.27715731]
[ 0. 0.53159854 -0. 0.53159854 -0.34089673]
[ 0. 0. -0.00292733 0. -0. ]]
###Markdown
**Expected Output**:

dA1 = [[ 0.36544439  0.         -0.00188233  0.         -0.17408748]
 [ 0.65515713  0.         -0.00337459  0.         -0.        ]]
dA2 = [[ 0.58180856  0.         -0.00299679  0.         -0.27715731]
 [ 0.          0.53159854 -0.          0.53159854 -0.34089673]
 [ 0.          0.         -0.00292733  0.         -0.        ]]

Let's now run the model with dropout (`keep_prob = 0.86`). This means that at every iteration you shut down each neuron of layers 1 and 2 with 14% probability. The function `model()` will now call:
- `forward_propagation_with_dropout` instead of `forward_propagation`.
- `backward_propagation_with_dropout` instead of `backward_propagation`.
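Before running it, here is a quick numerical sanity check (a sketch with made-up activations, not part of the assignment) of why the inverted-dropout scaling in step 4 keeps the expected activation unchanged:
###Code
import numpy as np

np.random.seed(0)
keep_prob = 0.86
a = np.full(100000, 3.0)                      # pretend activations, all equal to 3
mask = (np.random.rand(a.size) < keep_prob)   # shut down ~14% of units
a_dropped = a * mask / keep_prob              # inverted dropout scaling
print(a.mean(), a_dropped.mean())             # both means stay close to 3.0
###Output
_____no_output_____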
###Code
parameters = model(train_X, train_Y, keep_prob = 0.86, learning_rate = 0.3)
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
###Output
Cost after iteration 0: 0.6543912405149825
###Markdown
Dropout works great! The test accuracy has increased again (to 95%)! Your model is not overfitting the training set and does a great job on the test set. The French football team will be forever grateful to you! Run the code below to plot the decision boundary.
###Code
plt.title("Model with dropout")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____ |
notebooks/visualization_matplotlib_seaborn_usage.ipynb | ###Markdown
Data Visualization matplotlib & seaborn usage
###Code
## plot type
### comparison
* line plot
### relation
* scatter plot
### composition
* pie plot
### distribution
* dist plot (histogram)
### other
* Box plot
* Heat map
* Radar chart (spider chart)
* Bivariate distributions
* Scatter plots
* Hexbin chart
* Kernel density estimation
* Pairwise relationship chart
###Output
_____no_output_____
###Markdown
Comparison line plot matplotlib
###Code
import matplotlib.pyplot as plt
x = [2010, 2011, 2012, 2013, 2014, 2015, 2016, 2017, 2018, 2019]
y = [5, 3, 6, 20, 17, 16, 19, 30, 32, 35]
# y_compare = [i + 2 for i in y]
plt.plot(x, y)
# plt.plot(x, y_compare)
plt.show()
###Output
_____no_output_____
###Markdown
seaborn
###Code
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
x = [2010, 2011, 2012, 2013, 2014, 2015, 2016, 2017, 2018, 2019]
y = [5, 3, 6, 20, 17, 16, 19, 30, 32, 35]
y_compare = [i + 2 for i in y]
df = pd.DataFrame({'x': x, 'y': y, 'y_c': y_compare})
sns.relplot(x="x", y="y", kind="line", data=df)
plt.show()
###Output
_____no_output_____
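###Markdown
The `y_c` column created above is never drawn; a minimal sketch (reusing the same `df`) that overlays both series by reshaping the frame into long form so `hue` can separate them:
###Code
import matplotlib.pyplot as plt
import seaborn as sns

# Reuse df from the previous cell; melt to long form for a per-series hue
long_df = df.melt(id_vars='x', value_vars=['y', 'y_c'],
                  var_name='series', value_name='value')
sns.relplot(x='x', y='value', hue='series', kind='line', data=long_df)
plt.show()
###Output
_____no_output_____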
###Markdown
relation matplotlib
###Code
import numpy as np
import matplotlib.pyplot as plt
N = 1000
x_axis_array = np.random.rand(N)
y_axis_array = np.random.rand(N)
plt.scatter(x_axis_array, y_axis_array, marker='*')
plt.show()
###Output
_____no_output_____
###Markdown
seaborn
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
N = 1000
x_axis_array = np.random.rand(N)
y_axis_array = np.random.rand(N)
df = pd.DataFrame({'x':x_axis_array, 'y': y_axis_array})
sns.jointplot(x='x', y='y', data=df, kind='scatter')
plt.show()
###Output
_____no_output_____
###Markdown
composition matplotlib
###Code
import matplotlib.pyplot as plt
nums = [25, 37, 33, 37, 6]
labels = ['High-school','Bachelor','Master','Ph.d', 'Others']
plt.pie(x = nums, labels=labels)
plt.show()
###Output
_____no_output_____
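###Markdown
seaborn has no pie-chart function of its own; a minimal sketch that simply borrows a seaborn color palette for matplotlib's `pie` (same `nums` and `labels` as above):
###Code
import matplotlib.pyplot as plt
import seaborn as sns

nums = [25, 37, 33, 37, 6]
labels = ['High-school', 'Bachelor', 'Master', 'Ph.d', 'Others']
# Use a seaborn palette to color the matplotlib pie chart
plt.pie(x=nums, labels=labels, colors=sns.color_palette('pastel'))
plt.show()
###Output
_____no_output_____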
###Markdown
distribution matplotlib
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
a = np.random.randn(100)
s = pd.Series(a)
plt.hist(s)
plt.show()
###Output
_____no_output_____
###Markdown
seaborn
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
a = np.random.randn(100)
s = pd.Series(a)
sns.distplot(s, kde=False)
plt.show()
sns.distplot(s, kde=True)
plt.show()
###Output
_____no_output_____
###Markdown
other box plot matplotlib
###Code
import numpy as np
import matplotlib.pyplot as plt
data=np.random.normal(size=(10,4))
lables = ['A','B','C','D']
plt.boxplot(data,labels=lables)
plt.show()
###Output
_____no_output_____
###Markdown
seaborn
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns  # needed for sns.boxplot below
data=np.random.normal(size=(10,4))
lables = ['A','B','C','D']
df = pd.DataFrame(data, columns=lables)
sns.boxplot(data=df)
plt.show()
###Output
_____no_output_____
###Markdown
heatmap
###Code
import matplotlib.pyplot as plt
import seaborn as sns
flights = sns.load_dataset("flights")
data=flights.pivot('year','month','passengers')
sns.heatmap(data)
plt.show()
###Output
_____no_output_____
###Markdown
radar chart
###Code
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.font_manager import FontProperties
labels=np.array([u"var1", "var2", "var3", "var4", "var5", "var6"])
stats=[83, 61, 95, 67, 76, 88]
angles=np.linspace(0, 2*np.pi, len(labels), endpoint=False)
stats=np.concatenate((stats,[stats[0]]))
angles=np.concatenate((angles,[angles[0]]))
fig = plt.figure()
ax = fig.add_subplot(111, polar=True)
ax.plot(angles, stats, 'o-', linewidth=2)
ax.fill(angles, stats, alpha=0.25)
ax.set_thetagrids(angles * 180/np.pi, labels)
plt.show()
###Output
_____no_output_____
###Markdown
Bivariate distributions

Scatter plots
###Code
import numpy as np
import pandas as pd  # needed for pd.DataFrame below
import matplotlib.pyplot as plt
import seaborn as sns
mean, cov = [0, 1], [(1, .5), (.5, 1)]
data = np.random.multivariate_normal(mean, cov, 200)
df = pd.DataFrame(data, columns=["x", "y"])
# scatter plots
sns.jointplot(x="x", y="y", data=df, kind='scatter')
# Hexbin plots
x, y = np.random.multivariate_normal(mean, cov, 1000).T
with sns.axes_style("white"):
sns.jointplot(x=x, y=y, kind="hex", color="k")
# Kernel density estimation
sns.jointplot(x="x", y="y", data=df, kind="kde")
plt.show()
###Output
_____no_output_____
###Markdown
Hexbin plots
###Code
import numpy as np
import pandas as pd  # needed for pd.DataFrame below
import matplotlib.pyplot as plt
import seaborn as sns
mean, cov = [0, 1], [(1, .5), (.5, 1)]
data = np.random.multivariate_normal(mean, cov, 200)
df = pd.DataFrame(data, columns=["x", "y"])
# Hexbin plots
x, y = np.random.multivariate_normal(mean, cov, 1000).T
with sns.axes_style("white"):
sns.jointplot(x=x, y=y, kind="hex", color="k")
plt.show()
###Output
_____no_output_____
###Markdown
Kernel density estimation
###Code
import numpy as np
import pandas as pd  # needed for pd.DataFrame below
import matplotlib.pyplot as plt
import seaborn as sns
mean, cov = [0, 1], [(1, .5), (.5, 1)]
data = np.random.multivariate_normal(mean, cov, 200)
df = pd.DataFrame(data, columns=["x", "y"])
# Kernel density estimation
sns.jointplot(x="x", y="y", data=df, kind="kde")
plt.show()
###Output
_____no_output_____ |
Keras-Week_4-CNN.ipynb | ###Markdown
Convolutional Neural Networks with Keras

In this lab, we will learn how to use the Keras library to build convolutional neural networks. We will also use the popular MNIST dataset, and we will compare our results to those of a conventional neural network.

Objective for this notebook:
1. How to use the Keras library to build convolutional neural networks.
2. Convolutional neural network with one set of convolutional and pooling layers.
3. Convolutional neural network with two sets of convolutional and pooling layers.

Table of Contents
1. Import Keras and Packages
2. Convolutional Neural Network with One Set of Convolutional and Pooling Layers
3. Convolutional Neural Network with Two Sets of Convolutional and Pooling Layers

Import Keras and Packages

Let's start by importing the Keras libraries and the packages that we need to build a neural network.
###Code
import keras
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import to_categorical
###Output
Using TensorFlow backend.
###Markdown
When working with convolutional neural networks in particular, we will need additional packages.
###Code
from keras.layers.convolutional import Conv2D # to add convolutional layers
from keras.layers.convolutional import MaxPooling2D # to add pooling layers
from keras.layers import Flatten # to flatten data for fully connected layers
###Output
_____no_output_____
###Markdown
Convolutional Neural Network with One Set of Convolutional and Pooling Layers
###Code
# import data
from keras.datasets import mnist
# load data
(X_train, y_train), (X_test, y_test) = mnist.load_data()
# reshape to be [samples][pixels][width][height]
X_train = X_train.reshape(X_train.shape[0], 28, 28, 1).astype('float32')
X_test = X_test.reshape(X_test.shape[0], 28, 28, 1).astype('float32')
###Output
Downloading data from https://s3.amazonaws.com/img-datasets/mnist.npz
11493376/11490434 [==============================] - 0s 0us/step
###Markdown
Let's normalize the pixel values to be between 0 and 1
###Code
X_train = X_train / 255 # normalize training data
X_test = X_test / 255 # normalize test data
###Output
_____no_output_____
###Markdown
Next, let's convert the target variable into binary categories
###Code
y_train = to_categorical(y_train)
y_test = to_categorical(y_test)
num_classes = y_test.shape[1] # number of categories
###Output
_____no_output_____
###Markdown
Next, let's define a function that creates our model. Let's start with one set of convolutional and pooling layers.
###Code
def convolutional_model():
# create model
model = Sequential()
model.add(Conv2D(16, (5, 5), strides=(1, 1), activation='relu', input_shape=(28, 28, 1)))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
model.add(Flatten())
model.add(Dense(100, activation='relu'))
model.add(Dense(num_classes, activation='softmax'))
# compile model
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
return model
###Output
_____no_output_____
###Markdown
Finally, let's call the function to create the model, and then let's train it and evaluate it.
###Code
# build the model
model = convolutional_model()
# fit the model
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=10, batch_size=200, verbose=2)
# evaluate the model
scores = model.evaluate(X_test, y_test, verbose=0)
print("Accuracy: {} \n Error: {}".format(scores[1], 100-scores[1]*100))
###Output
WARNING:tensorflow:From /opt/conda/envs/Python36/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
WARNING:tensorflow:From /opt/conda/envs/Python36/lib/python3.6/site-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
Train on 60000 samples, validate on 10000 samples
Epoch 1/10
- 89s - loss: 0.3024 - acc: 0.9146 - val_loss: 0.1128 - val_acc: 0.9659
Epoch 2/10
- 81s - loss: 0.0919 - acc: 0.9731 - val_loss: 0.0718 - val_acc: 0.9759
Epoch 3/10
- 81s - loss: 0.0611 - acc: 0.9817 - val_loss: 0.0496 - val_acc: 0.9825
Epoch 4/10
- 81s - loss: 0.0472 - acc: 0.9861 - val_loss: 0.0490 - val_acc: 0.9836
Epoch 5/10
- 88s - loss: 0.0384 - acc: 0.9882 - val_loss: 0.0433 - val_acc: 0.9857
Epoch 6/10
- 92s - loss: 0.0317 - acc: 0.9902 - val_loss: 0.0379 - val_acc: 0.9876
Epoch 7/10
- 87s - loss: 0.0259 - acc: 0.9922 - val_loss: 0.0357 - val_acc: 0.9875
Epoch 8/10
- 84s - loss: 0.0222 - acc: 0.9931 - val_loss: 0.0394 - val_acc: 0.9875
Epoch 9/10
- 79s - loss: 0.0177 - acc: 0.9948 - val_loss: 0.0406 - val_acc: 0.9878
Epoch 10/10
- 85s - loss: 0.0157 - acc: 0.9953 - val_loss: 0.0344 - val_acc: 0.9894
Accuracy: 0.9894
Error: 1.0600000000000023
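###Markdown
Before adding a second convolutional block, it can help to inspect the layer output shapes and parameter counts of the model we just trained; a quick sketch using Keras' built-in `summary()`:
###Code
# Print layer output shapes and parameter counts of the trained model
model.summary()
###Output
_____no_output_____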
###Markdown
* * *

Convolutional Neural Network with Two Sets of Convolutional and Pooling Layers

Let's redefine our convolutional model so that it has two sets of convolutional and pooling layers instead of just one of each.
###Code
def convolutional_model():
# create model
model = Sequential()
model.add(Conv2D(16, (5, 5), activation='relu', input_shape=(28, 28, 1)))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
model.add(Conv2D(8, (2, 2), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
model.add(Flatten())
model.add(Dense(100, activation='relu'))
model.add(Dense(num_classes, activation='softmax'))
# Compile model
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
return model
###Output
_____no_output_____
###Markdown
Now, let's call the function to create our new convolutional neural network, and then let's train it and evaluate it.
###Code
# build the model
model = convolutional_model()
# fit the model
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=10, batch_size=200, verbose=2)
# evaluate the model
scores = model.evaluate(X_test, y_test, verbose=0)
print("Accuracy: {} \n Error: {}".format(scores[1], 100-scores[1]*100))
###Output
Train on 60000 samples, validate on 10000 samples
Epoch 1/10
- 164s - loss: 0.4755 - acc: 0.8641 - val_loss: 0.1470 - val_acc: 0.9553
Epoch 2/10
- 169s - loss: 0.1244 - acc: 0.9633 - val_loss: 0.0824 - val_acc: 0.9736
Epoch 3/10
- 168s - loss: 0.0866 - acc: 0.9738 - val_loss: 0.0659 - val_acc: 0.9789
Epoch 4/10
- 202s - loss: 0.0717 - acc: 0.9788 - val_loss: 0.0581 - val_acc: 0.9812
Epoch 5/10
- 125s - loss: 0.0590 - acc: 0.9818 - val_loss: 0.0497 - val_acc: 0.9832
Epoch 6/10
- 122s - loss: 0.0536 - acc: 0.9833 - val_loss: 0.0521 - val_acc: 0.9831
Epoch 7/10
- 128s - loss: 0.0474 - acc: 0.9856 - val_loss: 0.0469 - val_acc: 0.9850
Epoch 8/10
- 121s - loss: 0.0417 - acc: 0.9869 - val_loss: 0.0398 - val_acc: 0.9867
Epoch 9/10
- 135s - loss: 0.0391 - acc: 0.9878 - val_loss: 0.0415 - val_acc: 0.9863
Epoch 10/10
- 122s - loss: 0.0359 - acc: 0.9889 - val_loss: 0.0366 - val_acc: 0.9885
Accuracy: 0.9885
Error: 1.1499999999999915
|
09 - Create a Real-time Inferencing Service.ipynb | ###Markdown
Create a real-time inferencing service

After training a predictive model, you can deploy it as a real-time service that clients can use to get predictions from new data.

Connect to your workspace

To get started, connect to your workspace.

> **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.
###Code
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
###Output
Ready to use Azure ML 1.33.0 to work with wsag
###Markdown
Train and register a model

Now let's train and register a model.
###Code
from azureml.core import Experiment
from azureml.core import Model
import pandas as pd
import numpy as np
import joblib
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
# Create an Azure ML experiment in your workspace
experiment = Experiment(workspace=ws, name="mslearn-train-diabetes")
run = experiment.start_logging()
print("Starting experiment:", experiment.name)
# load the diabetes dataset
print("Loading Data...")
diabetes = pd.read_csv('data/diabetes.csv')
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness',
'SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a decision tree model
print('Training a decision tree model')
model = DecisionTreeClassifier().fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# Save the trained model
model_file = 'diabetes_model.pkl'
joblib.dump(value=model, filename=model_file)
run.upload_file(name = 'outputs/' + model_file, path_or_stream = './' + model_file)
# Complete the run
run.complete()
# Register the model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
tags={'Training context':'Inline Training'},
properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
print('Model trained and registered.')
###Output
Starting experiment: mslearn-train-diabetes
Loading Data...
Training a decision tree model
Accuracy: 0.8893333333333333
AUC: 0.8783073784765411
Model trained and registered.
###Markdown
Deploy the model as a web service

You have trained and registered a machine learning model that classifies patients based on the likelihood of them having diabetes. This model could be used in a production environment such as a doctor's surgery, where only patients deemed to be at risk need to be subjected to a clinical test for diabetes. To support this scenario, you will deploy the model as a web service.

First, let's determine what models you have registered in the workspace.
###Code
from azureml.core import Model
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
###Output
diabetes_model version: 1
Training context : Inline Training
AUC : 0.8783073784765411
Accuracy : 0.8893333333333333
###Markdown
Right, now let's get the model that we want to deploy. By default, if we specify a model name, the latest version will be returned.
###Code
model = ws.models['diabetes_model']
print(model.name, 'version', model.version)
###Output
diabetes_model version 1
###Markdown
We're going to create a web service to host this model, and this will require some code and configuration files; so let's create a folder for those.
###Code
import os
folder_name = 'diabetes_service'
# Create a folder for the web service files
experiment_folder = './' + folder_name
os.makedirs(experiment_folder, exist_ok=True)
print(folder_name, 'folder created.')
# Set path for scoring script and environment files
script_file = os.path.join(experiment_folder,"score_diabetes.py")
env_file = os.path.join(experiment_folder,"diabetes_env.yml")
###Output
diabetes_service folder created.
###Markdown
The web service where we deploy the model will need some Python code to load the input data, get the model from the workspace, and generate and return predictions. We'll save this code in an *entry script* (often called a *scoring script*) that will be deployed to the web service:
###Code
%%writefile $script_file
import json
import joblib
import numpy as np
from azureml.core.model import Model
# Called when the service is loaded
def init():
global model
# Get the path to the deployed model file and load it
model_path = Model.get_model_path('diabetes_model')
model = joblib.load(model_path)
# Called when a request is received
def run(raw_data):
# Get the input data as a numpy array
data = np.array(json.loads(raw_data)['data'])
# Get a prediction from the model
predictions = model.predict(data)
# Get the corresponding classname for each prediction (0 or 1)
classnames = ['not-diabetic', 'diabetic']
predicted_classes = []
for prediction in predictions:
predicted_classes.append(classnames[prediction])
# Return the predictions as JSON
return json.dumps(predicted_classes)
###Output
Overwriting ./diabetes_service\score_diabetes.py
###Markdown
The web service will be hosted in a container, and the container will need to install any required Python dependencies when it gets initialized. In this case, our scoring code requires the **scikit-learn** and **azureml-defaults** packages, so we'll create a .yml file that tells the container host to install them into the environment.
###Code
%%writefile $env_file
name: inference_env
dependencies:
- python=3.6.2
- scikit-learn
- pip
- pip:
- azureml-defaults
###Output
Overwriting ./diabetes_service\diabetes_env.yml
###Markdown
Now you're ready to deploy. We'll deploy the container as a service named **diabetes-service**. The deployment process includes the following steps:

1. Define an inference configuration, which includes the scoring and environment files required to load and use the model.
2. Define a deployment configuration that defines the execution environment in which the service will be hosted. In this case, an Azure Container Instance.
3. Deploy the model as a web service.
4. Verify the status of the deployed service.

> **More Information**: For more details about model deployment, and options for target execution environments, see the [documentation](https://docs.microsoft.com/azure/machine-learning/how-to-deploy-and-where).

Deployment will take some time as it first runs a process to create a container image, and then runs a process to create a web service based on the image. When deployment has completed successfully, you'll see a status of **Healthy**.
###Code
from azureml.core.webservice import AciWebservice
from azureml.core.model import InferenceConfig
# Configure the scoring environment
inference_config = InferenceConfig(runtime= "python",
entry_script=script_file,
conda_file=env_file)
deployment_config = AciWebservice.deploy_configuration(cpu_cores = 1, memory_gb = 1)
service_name = "diabetes-service"
service = Model.deploy(ws, service_name, [model], inference_config, deployment_config)
service.wait_for_deployment(True)
print(service.state)
###Output
Tips: You can try get_logs(): https://aka.ms/debugimage#dockerlog or local deployment: https://aka.ms/debugimage#debug-locally to debug if deployment takes longer than 10 minutes.
Running
2021-08-16 08:12:06+05:30 Creating Container Registry if not exists..
2021-08-16 08:22:07+05:30 Registering the environment..
2021-08-16 08:22:12+05:30 Building image..
2021-08-16 08:33:31+05:30 Generating deployment configuration.
2021-08-16 08:33:33+05:30 Submitting deployment to compute..
2021-08-16 08:33:46+05:30 Checking the status of deployment diabetes-service..
2021-08-16 08:36:23+05:30 Checking the status of inference endpoint diabetes-service.
Succeeded
ACI service creation operation finished, operation "Succeeded"
Healthy
###Markdown
Hopefully, the deployment has been successful and you can see a status of **Healthy**. If not, you can use the following code to get the service logs to help you troubleshoot.
###Code
print(service.get_logs())
# If you need to make a change and redeploy, you may need to delete unhealthy service using the following code:
#service.delete()
###Output
2021-08-16T03:06:14,551615800+00:00 - iot-server/run
2021-08-16T03:06:14,565298800+00:00 - gunicorn/run
Dynamic Python package installation is disabled.
Starting HTTP server
2021-08-16T03:06:14,569462400+00:00 - nginx/run
2021-08-16T03:06:14,567179600+00:00 - rsyslog/run
EdgeHubConnectionString and IOTEDGE_IOTHUBHOSTNAME are not set. Exiting...
2021-08-16T03:06:14,994119900+00:00 - iot-server/finish 1 0
2021-08-16T03:06:14,999089400+00:00 - Exit code 1 is normal. Not restarting iot-server.
Starting gunicorn 20.1.0
Listening at: http://127.0.0.1:31311 (62)
Using worker: sync
worker timeout is set to 300
Booting worker with pid: 90
SPARK_HOME not set. Skipping PySpark Initialization.
Initializing logger
2021-08-16 03:06:17,237 | root | INFO | Starting up app insights client
logging socket was found. logging is available.
logging socket was found. logging is available.
2021-08-16 03:06:17,239 | root | INFO | Starting up request id generator
2021-08-16 03:06:17,239 | root | INFO | Starting up app insight hooks
2021-08-16 03:06:17,242 | root | INFO | Invoking user's init function
no request id,/azureml-envs/azureml_e220b045f6c3c3008b1a386af067185d/lib/python3.6/site-packages/sklearn/base.py:315: UserWarning: Trying to unpickle estimator DecisionTreeClassifier from version 0.24.1 when using version 0.24.2. This might lead to breaking code or invalid results. Use at your own risk.
UserWarning)
2021-08-16 03:06:17,894 | root | INFO | Users's init has completed successfully
/azureml-envs/azureml_e220b045f6c3c3008b1a386af067185d/lib/python3.6/site-packages/sklearn/base.py:315: UserWarning: Trying to unpickle estimator DecisionTreeClassifier from version 0.24.1 when using version 0.24.2. This might lead to breaking code or invalid results. Use at your own risk.
UserWarning)
2021-08-16 03:06:17,901 | root | INFO | Skipping middleware: dbg_model_info as it's not enabled.
2021-08-16 03:06:17,901 | root | INFO | Skipping middleware: dbg_resource_usage as it's not enabled.
2021-08-16 03:06:17,902 | root | INFO | Scoring timeout is found from os.environ: 60000 ms
2021-08-16 03:06:21,907 | root | INFO | Swagger file not present
2021-08-16 03:06:21,908 | root | INFO | 404
127.0.0.1 - - [16/Aug/2021:03:06:21 +0000] "GET /swagger.json HTTP/1.0" 404 19 "-" "Go-http-client/1.1"
2021-08-16 03:06:26,117 | root | INFO | Swagger file not present
2021-08-16 03:06:26,117 | root | INFO | 404
127.0.0.1 - - [16/Aug/2021:03:06:26 +0000] "GET /swagger.json HTTP/1.0" 404 19 "-" "Go-http-client/1.1"
2021-08-16 03:08:40,791 | root | INFO | Swagger file not present
2021-08-16 03:08:40,792 | root | INFO | 404
127.0.0.1 - - [16/Aug/2021:03:08:40 +0000] "GET /swagger.json HTTP/1.0" 404 19 "-" "Go-http-client/1.1"
###Markdown
Take a look at your workspace in [Azure Machine Learning Studio](https://ml.azure.com) and view the **Endpoints** page, which shows the deployed services in your workspace.

You can also retrieve the names of web services in your workspace by running the following code:
###Code
for webservice_name in ws.webservices:
print(webservice_name)
###Output
diabetes-service
###Markdown
Use the web service

With the service deployed, now you can consume it from a client application.
###Code
import json
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22]]
print ('Patient: {}'.format(x_new[0]))
# Convert the array to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Call the web service, passing the input data (the web service will also accept the data in binary format)
predictions = service.run(input_data = input_json)
# Get the predicted class - it'll be the first (and only) one.
predicted_classes = json.loads(predictions)
print(predicted_classes[0])
###Output
Patient: [2, 180, 74, 24, 21, 23.9091702, 1.488172308, 22]
diabetic
###Markdown
You can also send multiple patient observations to the service, and get back a prediction for each one.
###Code
import json
# This time our input is an array of two feature arrays
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22],
[0,148,58,11,179,39.19207553,0.160829008,45]]
# Convert the array or arrays to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Call the web service, passing the input data
predictions = service.run(input_data = input_json)
# Get the predicted classes.
predicted_classes = json.loads(predictions)
for i in range(len(x_new)):
print ("Patient {}".format(x_new[i]), predicted_classes[i] )
###Output
Patient [2, 180, 74, 24, 21, 23.9091702, 1.488172308, 22] diabetic
Patient [0, 148, 58, 11, 179, 39.19207553, 0.160829008, 45] not-diabetic
###Markdown
The code above uses the Azure Machine Learning SDK to connect to the containerized web service and use it to generate predictions from your diabetes classification model. In production, a model is likely to be consumed by business applications that do not use the Azure Machine Learning SDK, but simply make HTTP requests to the web service.

Let's determine the URL to which these applications must submit their requests:
###Code
endpoint = service.scoring_uri
print(endpoint)
###Output
http://21f48e09-cdc5-4cd2-a85c-ae04e4d64179.centralindia.azurecontainer.io/score
###Markdown
Now that you know the endpoint URI, an application can simply make an HTTP request, sending the patient data in JSON format, and receive back the predicted class(es).
###Code
import requests
import json
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22],
[0,148,58,11,179,39.19207553,0.160829008,45]]
# Convert the array to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Set the content type
headers = { 'Content-Type':'application/json' }
predictions = requests.post(endpoint, input_json, headers = headers)
predicted_classes = json.loads(predictions.json())
for i in range(len(x_new)):
print ("Patient {}".format(x_new[i]), predicted_classes[i] )
###Output
Patient [2, 180, 74, 24, 21, 23.9091702, 1.488172308, 22] diabetic
Patient [0, 148, 58, 11, 179, 39.19207553, 0.160829008, 45] not-diabetic
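###Markdown
The request above works because this ACI deployment has authentication disabled. If the service were deployed with authentication enabled (as discussed below), the call would also need an **Authorization** header; a minimal sketch with a hypothetical key:
###Code
# Sketch only: for an auth-enabled deployment, include a bearer token.
# The key value here is hypothetical - for key-based auth it can be
# retrieved with service.get_keys().
auth_key = "<your-service-key>"
auth_headers = { 'Content-Type':'application/json',
                 'Authorization':'Bearer ' + auth_key }
# predictions = requests.post(endpoint, input_json, headers = auth_headers)
###Output
_____no_output_____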
###Markdown
You've deployed your web service as an Azure Container Instance (ACI) service that requires no authentication. This is fine for development and testing, but for production you should consider deploying to an Azure Kubernetes Service (AKS) cluster and enabling token-based authentication. This would require REST requests to include an **Authorization** header, as sketched above.

Delete the service

When you no longer need your service, you should delete it to avoid incurring unnecessary charges.
###Code
service.delete()
print ('Service deleted.')
###Output
Service deleted.
###Markdown
Create a real-time inferencing service

After training a predictive model, you can deploy it as a real-time service that clients can use to get predictions from new data.

Connect to your workspace

To get started, connect to your workspace.

> **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.
###Code
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
###Output
_____no_output_____
###Markdown
Train and register a model

Now let's train and register a model.
###Code
from azureml.core import Experiment
from azureml.core import Model
import pandas as pd
import numpy as np
import joblib
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
# Create an Azure ML experiment in your workspace
experiment = Experiment(workspace=ws, name="mslearn-train-diabetes")
run = experiment.start_logging()
print("Starting experiment:", experiment.name)
# load the diabetes dataset
print("Loading Data...")
diabetes = pd.read_csv('data/diabetes.csv')
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a decision tree model
print('Training a decision tree model')
model = DecisionTreeClassifier().fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# Save the trained model
model_file = 'diabetes_model.pkl'
joblib.dump(value=model, filename=model_file)
run.upload_file(name = 'outputs/' + model_file, path_or_stream = './' + model_file)
# Complete the run
run.complete()
# Register the model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
tags={'Training context':'Inline Training'},
properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
print('Model trained and registered.')
###Output
_____no_output_____
###Markdown
Deploy the model as a web service You have trained and registered a machine learning model that classifies patients based on the likelihood of them having diabetes. This model could be used in a production environment such as a doctor's surgery, where only patients deemed to be at risk need to be subjected to a clinical test for diabetes. To support this scenario, you will deploy the model as a web service. First, let's determine what models you have registered in the workspace.
###Code
from azureml.core import Model
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
###Output
_____no_output_____
###Markdown
Right, now let's get the model that we want to deploy. By default, if we specify a model name, the latest version will be returned.
###Code
model = ws.models['diabetes_model']
print(model.name, 'version', model.version)
###Output
_____no_output_____
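###Markdown
If you need a specific registered version rather than the latest, you can request it explicitly when constructing the model reference. A minimal sketch, assuming that version 1 of the model exists in your workspace:
###Code
from azureml.core import Model

# Retrieve a specific registered version instead of the latest (version 1 is an assumption)
specific_model = Model(ws, name='diabetes_model', version=1)
print(specific_model.name, 'version', specific_model.version)
###Output
_____no_output_____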
###Markdown
We're going to create a web service to host this model, and this will require some code and configuration files; so let's create a folder for those.
###Code
import os
# Create a folder for the deployment files
deployment_folder = './diabetes_service'
os.makedirs(deployment_folder, exist_ok=True)
print(deployment_folder, 'folder created.')
# Set path for scoring script
script_file = 'score_diabetes.py'
script_path = os.path.join(deployment_folder,script_file)
###Output
_____no_output_____
###Markdown
The web service where we deploy the model will need some Python code to load the input data, get the model from the workspace, and generate and return predictions. We'll save this code in an *entry script* (often called a *scoring script*) that will be deployed to the web service. The script consists of two functions:
- **init**: This function is called when the service is initialized, and is generally used to load the model. Note that the scoring script uses the **AZUREML_MODEL_DIR** environment variable to determine the folder where the model is stored.
- **run**: This function is called each time a client application submits new data, and is generally used to generate predictions from the model.
###Code
%%writefile $script_path
import json
import joblib
import numpy as np
import os
# Called when the service is loaded
def init():
global model
# Get the path to the deployed model file and load it
model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'diabetes_model.pkl')
model = joblib.load(model_path)
# Called when a request is received
def run(raw_data):
# Get the input data as a numpy array
data = np.array(json.loads(raw_data)['data'])
# Get a prediction from the model
predictions = model.predict(data)
# Get the corresponding classname for each prediction (0 or 1)
classnames = ['not-diabetic', 'diabetic']
predicted_classes = []
for prediction in predictions:
predicted_classes.append(classnames[prediction])
# Return the predictions as JSON
return json.dumps(predicted_classes)
###Output
_____no_output_____
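###Markdown
Before deploying, you can sanity-check the entry script locally. The cell below is a hypothetical smoke test, not part of the lab: it assumes `diabetes_model.pkl` is in the current folder (which is where the training cell above saved it), and points the **AZUREML_MODEL_DIR** environment variable at that folder so the script can find the model.
###Code
import os
import sys
import json

# Hypothetical local smoke test of the scoring script
os.environ['AZUREML_MODEL_DIR'] = '.'   # the script joins this path with 'diabetes_model.pkl'
sys.path.insert(0, deployment_folder)   # make score_diabetes.py importable
import score_diabetes

score_diabetes.init()
result = score_diabetes.run(json.dumps({"data": [[2,180,74,24,21,23.9091702,1.488172308,22]]}))
print(result)
###Output
_____no_output_____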
###Markdown
The web service will be hosted in a container, and the container will need to install any required Python dependencies when it gets initialized. In this case, our scoring code requires **scikit-learn** and some Azure Machine Learning specific packages that are used by the scoring web service, so we'll create an environment that includes these. Then we'll add that environment to an *inference configuration* along with the scoring script, and define a *deployment configuration* for the container in which the environment and script will be hosted. We can then deploy the model as a service based on these configurations. > **More Information**: For more details about model deployment, and options for target execution environments, see the [documentation](https://docs.microsoft.com/azure/machine-learning/how-to-deploy-and-where). Deployment will take some time as it first runs a process to create a container image, and then runs a process to create a web service based on the image. When deployment has completed successfully, you'll see a status of **Healthy**.
###Code
from azureml.core import Environment
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
# Configure the scoring environment
service_env = Environment(name='service-env')
python_packages = ['scikit-learn', 'azureml-defaults', 'azure-ml-api-sdk']
for package in python_packages:
service_env.python.conda_dependencies.add_pip_package(package)
inference_config = InferenceConfig(source_directory=deployment_folder,
entry_script=script_file,
environment=service_env)
# Configure the web service container
deployment_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)
# Deploy the model as a service
print('Deploying model...')
service_name = "diabetes-service"
service = Model.deploy(ws, service_name, [model], inference_config, deployment_config, overwrite=True)
service.wait_for_deployment(True)
print(service.state)
###Output
_____no_output_____
###Markdown
Hopefully, the deployment has been successful and you can see a status of **Healthy**. If not, you can use the following code to get the service logs to help you troubleshoot.
###Code
print(service.get_logs())
# If you need to make a change and redeploy, you may need to delete unhealthy service using the following code:
#service.delete()
###Output
_____no_output_____
###Markdown
Take a look at your workspace in [Azure Machine Learning Studio](https://ml.azure.com) and view the **Endpoints** page, which shows the deployed services in your workspace. You can also retrieve the names of web services in your workspace by running the following code:
###Code
for webservice_name in ws.webservices:
print(webservice_name)
###Output
_____no_output_____
###Markdown
Use the web service With the service deployed, now you can consume it from a client application.
###Code
import json
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22]]
print ('Patient: {}'.format(x_new[0]))
# Convert the array to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Call the web service, passing the input data (the web service will also accept the data in binary format)
predictions = service.run(input_data = input_json)
# Get the predicted class - it'll be the first (and only) one.
predicted_classes = json.loads(predictions)
print(predicted_classes[0])
###Output
_____no_output_____
###Markdown
You can also send multiple patient observations to the service, and get back a prediction for each one.
###Code
import json
# This time our input is an array of two feature arrays
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22],
[0,148,58,11,179,39.19207553,0.160829008,45]]
# Convert the array or arrays to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Call the web service, passing the input data
predictions = service.run(input_data = input_json)
# Get the predicted classes.
predicted_classes = json.loads(predictions)
for i in range(len(x_new)):
print ("Patient {}".format(x_new[i]), predicted_classes[i] )
###Output
_____no_output_____
###Markdown
The code above uses the Azure Machine Learning SDK to connect to the containerized web service and use it to generate predictions from your diabetes classification model. In production, a model is likely to be consumed by business applications that do not use the Azure Machine Learning SDK, but simply make HTTP requests to the web service. Let's determine the URL to which these applications must submit their requests:
###Code
endpoint = service.scoring_uri
print(endpoint)
###Output
_____no_output_____
###Markdown
Now that you know the endpoint URI, an application can simply make an HTTP request, sending the patient data in JSON format, and receive back the predicted class(es).
###Code
import requests
import json
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22],
[0,148,58,11,179,39.19207553,0.160829008,45]]
# Convert the array to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Set the content type
headers = { 'Content-Type':'application/json' }
predictions = requests.post(endpoint, input_json, headers = headers)
predicted_classes = json.loads(predictions.json())
for i in range(len(x_new)):
print ("Patient {}".format(x_new[i]), predicted_classes[i] )
###Output
_____no_output_____
###Markdown
You've deployed your web service as an Azure Container Instance (ACI) service that requires no authentication. This is fine for development and testing, but for production you should consider deploying to an Azure Kubernetes Service (AKS) cluster and enabling token-based authentication. This would require REST requests to include an **Authorization** header. Delete the service When you no longer need your service, you should delete it to avoid incurring unnecessary charges.
###Code
service.delete()
print ('Service deleted.')
###Output
_____no_output_____
###Markdown
Create a real-time inferencing service After training a predictive model, you can deploy it as a real-time service that clients can use to get predictions from new data. Connect to your workspace To get started, connect to your workspace. > **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.
###Code
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
###Output
_____no_output_____
###Markdown
Train and register a model Now let's train and register a model.
###Code
from azureml.core import Experiment
from azureml.core import Model
import pandas as pd
import numpy as np
import joblib
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
# Create an Azure ML experiment in your workspace
experiment = Experiment(workspace=ws, name="mslearn-train-diabetes")
run = experiment.start_logging()
print("Starting experiment:", experiment.name)
# load the diabetes dataset
print("Loading Data...")
diabetes = pd.read_csv('data/diabetes.csv')
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a decision tree model
print('Training a decision tree model')
model = DecisionTreeClassifier().fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# Save the trained model
model_file = 'diabetes_model.pkl'
joblib.dump(value=model, filename=model_file)
run.upload_file(name = 'outputs/' + model_file, path_or_stream = './' + model_file)
# Complete the run
run.complete()
# Register the model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
tags={'Training context':'Inline Training'},
properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
print('Model trained and registered.')
###Output
_____no_output_____
###Markdown
Deploy the model as a web service You have trained and registered a machine learning model that classifies patients based on the likelihood of them having diabetes. This model could be used in a production environment such as a doctor's surgery, where only patients deemed to be at risk need to be subjected to a clinical test for diabetes. To support this scenario, you will deploy the model as a web service. First, let's determine what models you have registered in the workspace.
###Code
from azureml.core import Model
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
###Output
_____no_output_____
###Markdown
Right, now let's get the model that we want to deploy. By default, if we specify a model name, the latest version will be returned.
###Code
model = ws.models['diabetes_model']
print(model.name, 'version', model.version)
###Output
_____no_output_____
###Markdown
We're going to create a web service to host this model, and this will require some code and configuration files; so let's create a folder for those.
###Code
import os
folder_name = 'diabetes_service'
# Create a folder for the web service files
experiment_folder = './' + folder_name
os.makedirs(experiment_folder, exist_ok=True)
print(folder_name, 'folder created.')
# Set path for scoring script and environment files
script_file = os.path.join(experiment_folder,"score_diabetes.py")
env_file = os.path.join(experiment_folder,"diabetes_env.yml")
###Output
_____no_output_____
###Markdown
The web service where we deploy the model will need some Python code to load the input data, get the model from the workspace, and generate and return predictions. We'll save this code in an *entry script* (often called a *scoring script*) that will be deployed to the web service:
###Code
%%writefile $script_file
import json
import joblib
import numpy as np
from azureml.core.model import Model
# Called when the service is loaded
def init():
global model
# Get the path to the deployed model file and load it
model_path = Model.get_model_path('diabetes_model')
model = joblib.load(model_path)
# Called when a request is received
def run(raw_data):
# Get the input data as a numpy array
data = np.array(json.loads(raw_data)['data'])
# Get a prediction from the model
predictions = model.predict(data)
# Get the corresponding classname for each prediction (0 or 1)
classnames = ['not-diabetic', 'diabetic']
predicted_classes = []
for prediction in predictions:
predicted_classes.append(classnames[prediction])
# Return the predictions as JSON
return json.dumps(predicted_classes)
###Output
_____no_output_____
###Markdown
The web service will be hosted in a container, and the container will need to install any required Python dependencies when it gets initialized. In this case, our scoring code requires **scikit-learn** and some Azure Machine Learning specific packages that are used by the scoring web service, so we'll create a .yml file that tells the container host to install them into the environment.
###Code
%%writefile $env_file
name: inference_env
dependencies:
- python=3.6.2
- scikit-learn
- pip
- pip:
- azureml-defaults
- azure-ml-api-sdk
###Output
_____no_output_____
###Markdown
Now you're ready to deploy. We'll deploy the container as a service named **diabetes-service**. The deployment process includes the following steps:
1. Define an inference configuration, which includes the scoring and environment files required to load and use the model.
2. Define a deployment configuration that defines the execution environment in which the service will be hosted. In this case, an Azure Container Instance.
3. Deploy the model as a web service.
4. Verify the status of the deployed service.
> **More Information**: For more details about model deployment, and options for target execution environments, see the [documentation](https://docs.microsoft.com/azure/machine-learning/how-to-deploy-and-where).
Deployment will take some time as it first runs a process to create a container image, and then runs a process to create a web service based on the image. When deployment has completed successfully, you'll see a status of **Healthy**.
###Code
from azureml.core.webservice import AciWebservice
from azureml.core.model import InferenceConfig
# Configure the scoring environment
inference_config = InferenceConfig(runtime= "python",
entry_script=script_file,
conda_file=env_file)
deployment_config = AciWebservice.deploy_configuration(cpu_cores = 1, memory_gb = 1)
service_name = "diabetes-service"
service = Model.deploy(ws, service_name, [model], inference_config, deployment_config)
service.wait_for_deployment(True)
print(service.state)
###Output
_____no_output_____
###Markdown
Hopefully, the deployment has been successful and you can see a status of **Healthy**. If not, you can use the following code to get the service logs to help you troubleshoot.
###Code
print(service.get_logs())
# If you need to make a change and redeploy, you may need to delete unhealthy service using the following code:
#service.delete()
###Output
_____no_output_____
###Markdown
Take a look at your workspace in [Azure Machine Learning Studio](https://ml.azure.com) and view the **Endpoints** page, which shows the deployed services in your workspace. You can also retrieve the names of web services in your workspace by running the following code:
###Code
for webservice_name in ws.webservices:
print(webservice_name)
###Output
_____no_output_____
###Markdown
Use the web service With the service deployed, now you can consume it from a client application.
###Code
import json
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22]]
print ('Patient: {}'.format(x_new[0]))
# Convert the array to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Call the web service, passing the input data (the web service will also accept the data in binary format)
predictions = service.run(input_data = input_json)
# Get the predicted class - it'll be the first (and only) one.
predicted_classes = json.loads(predictions)
print(predicted_classes[0])
###Output
_____no_output_____
###Markdown
You can also send multiple patient observations to the service, and get back a prediction for each one.
###Code
import json
# This time our input is an array of two feature arrays
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22],
[0,148,58,11,179,39.19207553,0.160829008,45]]
# Convert the array or arrays to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Call the web service, passing the input data
predictions = service.run(input_data = input_json)
# Get the predicted classes.
predicted_classes = json.loads(predictions)
for i in range(len(x_new)):
print ("Patient {}".format(x_new[i]), predicted_classes[i] )
###Output
_____no_output_____
###Markdown
The code above uses the Azure Machine Learning SDK to connect to the containerized web service and use it to generate predictions from your diabetes classification model. In production, a model is likely to be consumed by business applications that do not use the Azure Machine Learning SDK, but simply make HTTP requests to the web service. Let's determine the URL to which these applications must submit their requests:
###Code
endpoint = service.scoring_uri
print(endpoint)
###Output
_____no_output_____
###Markdown
Now that you know the endpoint URI, an application can simply make an HTTP request, sending the patient data in JSON format, and receive back the predicted class(es).
###Code
import requests
import json
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22],
[0,148,58,11,179,39.19207553,0.160829008,45]]
# Convert the array to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Set the content type
headers = { 'Content-Type':'application/json' }
predictions = requests.post(endpoint, input_json, headers = headers)
predicted_classes = json.loads(predictions.json())
for i in range(len(x_new)):
print ("Patient {}".format(x_new[i]), predicted_classes[i] )
###Output
_____no_output_____
###Markdown
You've deployed your web service as an Azure Container Instance (ACI) service that requires no authentication. This is fine for development and testing, but for production you should consider deploying to an Azure Kubernetes Service (AKS) cluster and enabling token-based authentication. This would require REST requests to include an **Authorization** header. Delete the service When you no longer need your service, you should delete it to avoid incurring unnecessary charges.
###Code
service.delete()
print ('Service deleted.')
###Output
_____no_output_____
###Markdown
Create a real-time inferencing service After training a predictive model, you can deploy it as a real-time service that clients can use to get predictions from new data. Connect to your workspace To get started, connect to your workspace. > **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.
###Code
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
###Output
Ready to use Azure ML 1.22.0 to work with azure_ds_challenge
###Markdown
Train and register a model Now let's train and register a model.
###Code
from azureml.core import Experiment
from azureml.core import Model
import pandas as pd
import numpy as np
import joblib
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
# Create an Azure ML experiment in your workspace
experiment = Experiment(workspace=ws, name="mslearn-train-diabetes")
run = experiment.start_logging()
print("Starting experiment:", experiment.name)
# load the diabetes dataset
print("Loading Data...")
diabetes = pd.read_csv('data/diabetes.csv')
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a decision tree model
print('Training a decision tree model')
model = DecisionTreeClassifier().fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# Save the trained model
model_file = 'diabetes_model.pkl'
joblib.dump(value=model, filename=model_file)
run.upload_file(name = 'outputs/' + model_file, path_or_stream = './' + model_file)
# Complete the run
run.complete()
# Register the model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
tags={'Training context':'Inline Training'},
properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
print('Model trained and registered.')
###Output
Starting experiment: mslearn-train-diabetes
Loading Data...
Training a decision tree model
Accuracy: 0.8926666666666667
AUC: 0.8788695955022637
Model trained and registered.
###Markdown
Deploy the model as a web service You have trained and registered a machine learning model that classifies patients based on the likelihood of them having diabetes. This model could be used in a production environment such as a doctor's surgery, where only patients deemed to be at risk need to be subjected to a clinical test for diabetes. To support this scenario, you will deploy the model as a web service. First, let's determine what models you have registered in the workspace.
###Code
from azureml.core import Model
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
###Output
diabetes_model version: 7
Training context : Inline Training
AUC : 0.8788695955022637
Accuracy : 0.8926666666666667
diabetes_model version: 6
Training context : Pipeline
AUC : 0.8837616052365906
Accuracy : 0.8988888888888888
diabetes_model version: 5
Training context : Compute cluster
AUC : 0.8832499705804543
Accuracy : 0.8977777777777778
diabetes_model version: 4
Training context : File dataset
AUC : 0.8568517900798176
Accuracy : 0.7891111111111111
diabetes_model version: 3
Training context : Tabular dataset
AUC : 0.8568595320655352
Accuracy : 0.7891111111111111
diabetes_model version: 2
Training context : Parameterized script
AUC : 0.8484357430717946
Accuracy : 0.774
diabetes_model version: 1
Training context : Script
AUC : 0.8483203144435048
Accuracy : 0.774
###Markdown
Right, now let's get the model that we want to deploy. By default, if we specify a model name, the latest version will be returned.
###Code
model = ws.models['diabetes_model']
print(model.name, 'version', model.version)
###Output
diabetes_model version 7
###Markdown
We're going to create a web service to host this model, and this will require some code and configuration files; so let's create a folder for those.
###Code
import os
folder_name = 'diabetes_service'
# Create a folder for the web service files
experiment_folder = './' + folder_name
os.makedirs(experiment_folder, exist_ok=True)
print(folder_name, 'folder created.')
# Set path for scoring script
script_file = os.path.join(experiment_folder,"score_diabetes.py")
###Output
diabetes_service folder created.
###Markdown
The web service where we deploy the model will need some Python code to load the input data, get the model from the workspace, and generate and return predictions. We'll save this code in an *entry script* (often called a *scoring script*) that will be deployed to the web service:
###Code
%%writefile $script_file
import json
import joblib
import numpy as np
from azureml.core.model import Model
# Called when the service is loaded
def init():
global model
# Get the path to the deployed model file and load it
model_path = Model.get_model_path('diabetes_model')
model = joblib.load(model_path)
# Called when a request is received
def run(raw_data):
# Get the input data as a numpy array
data = np.array(json.loads(raw_data)['data'])
# Get a prediction from the model
predictions = model.predict(data)
# Get the corresponding classname for each prediction (0 or 1)
classnames = ['not-diabetic', 'diabetic']
predicted_classes = []
for prediction in predictions:
predicted_classes.append(classnames[prediction])
# Return the predictions as JSON
return json.dumps(predicted_classes)
###Output
Writing ./diabetes_service/score_diabetes.py
###Markdown
The web service will be hosted in a container, and the container will need to install any required Python dependencies when it gets initialized. In this case, our scoring code requires **scikit-learn**, so we'll create a .yml file that tells the container host to install this into the environment.
###Code
from azureml.core.conda_dependencies import CondaDependencies
# Add the dependencies for our model (AzureML defaults is already included)
myenv = CondaDependencies()
myenv.add_conda_package('scikit-learn')
# Save the environment config as a .yml file
env_file = os.path.join(experiment_folder,"diabetes_env.yml")
with open(env_file,"w") as f:
f.write(myenv.serialize_to_string())
print("Saved dependency info in", env_file)
# Print the .yml file
with open(env_file,"r") as f:
print(f.read())
###Output
Saved dependency info in ./diabetes_service/diabetes_env.yml
# Conda environment specification. The dependencies defined in this file will
# be automatically provisioned for runs with userManagedDependencies=False.
# Details about the Conda environment file format:
# https://conda.io/docs/user-guide/tasks/manage-environments.html#create-env-file-manually
name: project_environment
dependencies:
# The python interpreter version.
# Currently Azure ML only supports 3.5.2 and later.
- python=3.6.2
- pip:
# Required packages for AzureML execution, history, and data preparation.
- azureml-defaults
- scikit-learn
channels:
- anaconda
- conda-forge
###Markdown
Now you're ready to deploy. We'll deploy the container as a service named **diabetes-service**. The deployment process includes the following steps:
1. Define an inference configuration, which includes the scoring and environment files required to load and use the model.
2. Define a deployment configuration that defines the execution environment in which the service will be hosted. In this case, an Azure Container Instance.
3. Deploy the model as a web service.
4. Verify the status of the deployed service.
> **More Information**: For more details about model deployment, and options for target execution environments, see the [documentation](https://docs.microsoft.com/azure/machine-learning/how-to-deploy-and-where).
Deployment will take some time as it first runs a process to create a container image, and then runs a process to create a web service based on the image. When deployment has completed successfully, you'll see a status of **Healthy**.
###Code
from azureml.core.webservice import AciWebservice
from azureml.core.model import InferenceConfig
# Configure the scoring environment
inference_config = InferenceConfig(runtime= "python",
entry_script=script_file,
conda_file=env_file)
deployment_config = AciWebservice.deploy_configuration(cpu_cores = 1, memory_gb = 1)
service_name = "diabetes-service"
service = Model.deploy(ws, service_name, [model], inference_config, deployment_config)
service.wait_for_deployment(True)
print(service.state)
###Output
Tips: You can try get_logs(): https://aka.ms/debugimage#dockerlog or local deployment: https://aka.ms/debugimage#debug-locally to debug if deployment takes longer than 10 minutes.
Running.....................................................................................................................................
Succeeded
ACI service creation operation finished, operation "Succeeded"
Healthy
###Markdown
Hopefully, the deployment has been successful and you can see a status of **Healthy**. If not, you can use the following code to get the service logs to help you troubleshoot.
###Code
print(service.get_logs())
# If you need to make a change and redeploy, you may need to delete unhealthy service using the following code:
#service.delete()
###Output
2021-03-25T21:31:52,362150500+00:00 - gunicorn/run
2021-03-25T21:31:52,371024100+00:00 - rsyslog/run
2021-03-25T21:31:52,381009700+00:00 - iot-server/run
2021-03-25T21:31:52,418078600+00:00 - nginx/run
/usr/sbin/nginx: /azureml-envs/azureml_4b824bcb98517d791c41923f24d65461/lib/libcrypto.so.1.0.0: no version information available (required by /usr/sbin/nginx)
/usr/sbin/nginx: /azureml-envs/azureml_4b824bcb98517d791c41923f24d65461/lib/libcrypto.so.1.0.0: no version information available (required by /usr/sbin/nginx)
/usr/sbin/nginx: /azureml-envs/azureml_4b824bcb98517d791c41923f24d65461/lib/libssl.so.1.0.0: no version information available (required by /usr/sbin/nginx)
/usr/sbin/nginx: /azureml-envs/azureml_4b824bcb98517d791c41923f24d65461/lib/libssl.so.1.0.0: no version information available (required by /usr/sbin/nginx)
/usr/sbin/nginx: /azureml-envs/azureml_4b824bcb98517d791c41923f24d65461/lib/libssl.so.1.0.0: no version information available (required by /usr/sbin/nginx)
EdgeHubConnectionString and IOTEDGE_IOTHUBHOSTNAME are not set. Exiting...
2021-03-25T21:31:54,449069600+00:00 - iot-server/finish 1 0
Starting gunicorn 19.9.0
2021-03-25T21:31:54,457111800+00:00 - Exit code 1 is normal. Not restarting iot-server.
Listening at: http://127.0.0.1:31311 (65)
Using worker: sync
worker timeout is set to 300
Booting worker with pid: 97
SPARK_HOME not set. Skipping PySpark Initialization.
Initializing logger
2021-03-25 21:32:00,390 | root | INFO | Starting up app insights client
2021-03-25 21:32:00,390 | root | INFO | Starting up request id generator
2021-03-25 21:32:00,390 | root | INFO | Starting up app insight hooks
2021-03-25 21:32:00,390 | root | INFO | Invoking user's init function
/azureml-envs/azureml_4b824bcb98517d791c41923f24d65461/lib/python3.6/site-packages/sklearn/base.py:334: UserWarning: Trying to unpickle estimator DecisionTreeClassifier from version 0.22.2.post1 when using version 0.23.2. This might lead to breaking code or invalid results. Use at your own risk.
2021-03-25 21:32:03,591 | root | INFO | Users's init has completed successfully
UserWarning)
2021-03-25 21:32:03,600 | root | INFO | Skipping middleware: dbg_model_info as it's not enabled.
2021-03-25 21:32:03,600 | root | INFO | Skipping middleware: dbg_resource_usage as it's not enabled.
2021-03-25 21:32:03,602 | root | INFO | Scoring timeout is found from os.environ: 60000 ms
2021-03-25 21:32:05,154 | root | INFO | Swagger file not present
2021-03-25 21:32:05,155 | root | INFO | 404
127.0.0.1 - - [25/Mar/2021:21:32:05 +0000] "GET /swagger.json HTTP/1.0" 404 19 "-" "Go-http-client/1.1"
2021-03-25 21:32:08,687 | root | INFO | Swagger file not present
2021-03-25 21:32:08,688 | root | INFO | 404
127.0.0.1 - - [25/Mar/2021:21:32:08 +0000] "GET /swagger.json HTTP/1.0" 404 19 "-" "Go-http-client/1.1"
###Markdown
Take a look at your workspace in [Azure Machine Learning Studio](https://ml.azure.com) and view the **Endpoints** page, which shows the deployed services in your workspace. You can also retrieve the names of web services in your workspace by running the following code:
###Code
for webservice_name in ws.webservices:
print(webservice_name)
###Output
diabetes-service
###Markdown
Use the web service With the service deployed, now you can consume it from a client application.
###Code
import json
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22]]
print ('Patient: {}'.format(x_new[0]))
# Convert the array to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Call the web service, passing the input data (the web service will also accept the data in binary format)
predictions = service.run(input_data = input_json)
# Get the predicted class - it'll be the first (and only) one.
predicted_classes = json.loads(predictions)
print(predicted_classes[0])
###Output
Patient: [2, 180, 74, 24, 21, 23.9091702, 1.488172308, 22]
diabetic
###Markdown
You can also send multiple patient observations to the service, and get back a prediction for each one.
###Code
import json
# This time our input is an array of two feature arrays
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22],
[0,148,58,11,179,39.19207553,0.160829008,45]]
# Convert the array or arrays to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Call the web service, passing the input data
predictions = service.run(input_data = input_json)
# Get the predicted classes.
predicted_classes = json.loads(predictions)
for i in range(len(x_new)):
print ("Patient {}".format(x_new[i]), predicted_classes[i] )
###Output
Patient [2, 180, 74, 24, 21, 23.9091702, 1.488172308, 22] diabetic
Patient [0, 148, 58, 11, 179, 39.19207553, 0.160829008, 45] not-diabetic
###Markdown
The code above uses the Azure Machine Learning SDK to connect to the containerized web service and use it to generate predictions from your diabetes classification model. In production, a model is likely to be consumed by business applications that do not use the Azure Machine Learning SDK, but simply make HTTP requests to the web service. Let's determine the URL to which these applications must submit their requests:
###Code
endpoint = service.scoring_uri
print(endpoint)
###Output
http://89f6658a-543c-4d2c-bcbb-5dee807a9381.eastus2.azurecontainer.io/score
###Markdown
Now that you know the endpoint URI, an application can simply make an HTTP request, sending the patient data in JSON format, and receive back the predicted class(es).
###Code
import requests
import json
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22],
[0,148,58,11,179,39.19207553,0.160829008,45]]
# Convert the array to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Set the content type
headers = { 'Content-Type':'application/json' }
predictions = requests.post(endpoint, input_json, headers = headers)
predicted_classes = json.loads(predictions.json())
for i in range(len(x_new)):
print ("Patient {}".format(x_new[i]), predicted_classes[i] )
###Output
Patient [2, 180, 74, 24, 21, 23.9091702, 1.488172308, 22] diabetic
Patient [0, 148, 58, 11, 179, 39.19207553, 0.160829008, 45] not-diabetic
###Markdown
You've deployed your web service as an Azure Container Instance (ACI) service that requires no authentication. This is fine for development and testing, but for production you should consider deploying to an Azure Kubernetes Service (AKS) cluster and enabling token-based authentication (see the sketch after the deletion step below). This would require REST requests to include an **Authorization** header. Delete the service When you no longer need your service, you should delete it to avoid incurring unnecessary charges.
###Code
service.delete()
print ('Service deleted.')
###Output
Service deleted.
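###Markdown
For reference, the AKS deployment with token-based authentication mentioned above might look like the sketch below. The compute target name 'aks-cluster' is an assumption, not part of this lab: it presumes an AKS cluster has already been attached to the workspace, and reuses the `model` and `inference_config` objects defined earlier.
###Code
from azureml.core.compute import AksCompute
from azureml.core.webservice import AksWebservice

# Hypothetical: 'aks-cluster' is an assumed, pre-attached AKS compute target
aks_target = AksCompute(ws, 'aks-cluster')
# Key auth is disabled because key- and token-based auth cannot both be enabled
aks_config = AksWebservice.deploy_configuration(cpu_cores=1, memory_gb=1,
                                                auth_enabled=False,
                                                token_auth_enabled=True)
aks_service = Model.deploy(ws, 'diabetes-service-aks', [model],
                           inference_config, aks_config,
                           deployment_target=aks_target)
aks_service.wait_for_deployment(True)
print(aks_service.state)
###Output
_____no_output_____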
###Markdown
Create a real-time inferencing service
After training a predictive model, you can deploy it as a real-time service that clients can use to get predictions from new data. Connect to your workspace
To get started, connect to your workspace.
> **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.
###Code
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
###Output
_____no_output_____
###Markdown
Train and register a model
Now let's train and register a model.
###Code
from azureml.core import Experiment
from azureml.core import Model
import pandas as pd
import numpy as np
import joblib
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
# Create an Azure ML experiment in your workspace
experiment = Experiment(workspace=ws, name="mslearn-train-diabetes")
run = experiment.start_logging()
print("Starting experiment:", experiment.name)
# Load the diabetes dataset
print("Loading Data...")
diabetes = pd.read_csv('data/diabetes.csv')
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a decision tree model
print('Training a decision tree model')
model = DecisionTreeClassifier().fit(X_train, y_train)
# Calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# Calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# Save the trained model
model_file = 'diabetes_model.pkl'
joblib.dump(value=model, filename=model_file)
run.upload_file(name = 'outputs/' + model_file, path_or_stream = './' + model_file)
# Complete the run
run.complete()
# Register the model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
tags={'Training context':'Inline Training'},
properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
print('Model trained and registered.')
###Output
_____no_output_____
###Markdown
Deploy the model as a web service
You have trained and registered a machine learning model that classifies patients based on the likelihood of them having diabetes. This model could be used in a production environment such as a doctor's surgery, where only patients deemed to be at risk need to be subjected to a clinical test for diabetes. To support this scenario, you will deploy the model as a web service.
First, let's determine what models you have registered in the workspace.
###Code
from azureml.core import Model
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
###Output
_____no_output_____
###Markdown
Right, now let's get the model that we want to deploy. By default, if we specify a model name, the latest version will be returned.
###Code
model = ws.models['diabetes_model']
print(model.name, 'version', model.version)
###Output
_____no_output_____
###Markdown
We're going to create a web service to host this model, and this will require some code and configuration files; so let's create a folder for those.
###Code
import os
folder_name = 'diabetes_service'
# Create a folder for the web service files
experiment_folder = './' + folder_name
os.makedirs(experiment_folder, exist_ok=True)
print(folder_name, 'folder created.')
# Set paths for the scoring script and environment file
script_file = os.path.join(experiment_folder,"score_diabetes.py")
env_file = os.path.join(experiment_folder,"diabetes_env.yml")
###Output
_____no_output_____
###Markdown
The web service where we deploy the model will need some Python code to load the input data, get the model from the workspace, and generate and return predictions. We'll save this code in an *entry script* (often called a *scoring script*) that will be deployed to the web service:
###Code
%%writefile $script_file
import json
import joblib
import numpy as np
from azureml.core.model import Model
# Called when the service is loaded
def init():
global model
# Get the path to the deployed model file and load it
model_path = Model.get_model_path('diabetes_model')
model = joblib.load(model_path)
# Called when a request is received
def run(raw_data):
# Get the input data as a numpy array
data = np.array(json.loads(raw_data)['data'])
# Get a prediction from the model
predictions = model.predict(data)
# Get the corresponding classname for each prediction (0 or 1)
classnames = ['not-diabetic', 'diabetic']
predicted_classes = []
for prediction in predictions:
predicted_classes.append(classnames[prediction])
# Return the predictions as JSON
return json.dumps(predicted_classes)
###Output
_____no_output_____
###Markdown
The web service will be hosted in a container, and the container will need to install any required Python dependencies when it gets initialized. In this case, our scoring code requires the **scikit-learn** and **azureml-defaults** packages, so we'll create a .yml file that tells the container host to install them into the environment.
###Code
%%writefile $env_file
name: inference_env
dependencies:
- python=3.6.2
- scikit-learn
- pip
- pip:
- azureml-defaults
###Output
_____no_output_____
###Markdown
Now you're ready to deploy. We'll deploy the container as a service named **diabetes-service**. The deployment process includes the following steps:
1. Define an inference configuration, which includes the scoring and environment files required to load and use the model.
2. Define a deployment configuration that defines the execution environment in which the service will be hosted. In this case, an Azure Container Instance.
3. Deploy the model as a web service.
4. Verify the status of the deployed service.
> **More Information**: For more details about model deployment, and options for target execution environments, see the [documentation](https://docs.microsoft.com/azure/machine-learning/how-to-deploy-and-where).
Deployment will take some time as it first runs a process to create a container image, and then runs a process to create a web service based on the image. When deployment has completed successfully, you'll see a status of **Healthy**.
###Code
from azureml.core.webservice import AciWebservice
from azureml.core.model import InferenceConfig
# Configure the scoring environment
inference_config = InferenceConfig(runtime= "python",
entry_script=script_file,
conda_file=env_file)
deployment_config = AciWebservice.deploy_configuration(cpu_cores = 1, memory_gb = 1)
service_name = "diabetes-service"
service = Model.deploy(ws, service_name, [model], inference_config, deployment_config)
service.wait_for_deployment(True)
print(service.state)
###Output
_____no_output_____
###Markdown
Hopefully, the deployment has been successful and you can see a status of **Healthy**. If not, you can use the following code to get the service logs to help you troubleshoot.
###Code
print(service.get_logs())
# If you need to make a change and redeploy, you may need to delete the unhealthy service using the following code:
#service.delete()
###Output
_____no_output_____
###Markdown
Take a look at your workspace in [Azure Machine Learning Studio](https://ml.azure.com) and view the **Endpoints** page, which shows the deployed services in your workspace.
You can also retrieve the names of web services in your workspace by running the following code:
###Code
for webservice_name in ws.webservices:
print(webservice_name)
###Output
_____no_output_____
###Markdown
Use the web service
With the service deployed, now you can consume it from a client application.
###Code
import json
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22]]
print ('Patient: {}'.format(x_new[0]))
# Convert the array to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Call the web service, passing the input data (the web service will also accept the data in binary format)
predictions = service.run(input_data = input_json)
# Get the predicted class - it'll be the first (and only) one.
predicted_classes = json.loads(predictions)
print(predicted_classes[0])
###Output
_____no_output_____
###Markdown
You can also send multiple patient observations to the service, and get back a prediction for each one.
###Code
import json
# This time our input is an array of two feature arrays
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22],
[0,148,58,11,179,39.19207553,0.160829008,45]]
# Convert the array of arrays to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Call the web service, passing the input data
predictions = service.run(input_data = input_json)
# Get the predicted classes
predicted_classes = json.loads(predictions)
for i in range(len(x_new)):
print ("Patient {}".format(x_new[i]), predicted_classes[i] )
###Output
_____no_output_____
###Markdown
The code above uses the Azure Machine Learning SDK to connect to the containerized web service and use it to generate predictions from your diabetes classification model. In production, a model is likely to be consumed by business applications that do not use the Azure Machine Learning SDK, but simply make HTTP requests to the web service.
Let's determine the URL to which these applications must submit their requests:
###Code
endpoint = service.scoring_uri
print(endpoint)
###Output
_____no_output_____
###Markdown
Now that you know the endpoint URI, an application can simply make an HTTP request, sending the patient data in JSON format, and receive back the predicted class(es).
###Code
import requests
import json
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22],
[0,148,58,11,179,39.19207553,0.160829008,45]]
# Convert the array to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Set the content type
headers = { 'Content-Type':'application/json' }
predictions = requests.post(endpoint, input_json, headers = headers)
predicted_classes = json.loads(predictions.json())
for i in range(len(x_new)):
print ("Patient {}".format(x_new[i]), predicted_classes[i] )
###Output
_____no_output_____
###Markdown
You've deployed your web service as an Azure Container Instance (ACI) service that requires no authentication. This is fine for development and testing, but for production you should consider deploying to an Azure Kubernetes Service (AKS) cluster and enabling token-based authentication. This would require REST requests to include an **Authorization** header.
Delete the service
When you no longer need your service, you should delete it to avoid incurring unnecessary charges.
###Code
service.delete()
print ('Service deleted.')
###Output
_____no_output_____
###Markdown
Create a real-time inferencing service After training a predictive model, you can deploy it as a real-time service that clients can use to get predictions from new data. Connect to your workspace To get started, connect to your workspace. > **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.
###Code
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
###Output
Ready to use Azure ML 1.22.0 to work with dp100
###Markdown
Train and register a model Now let's train and register a model.
###Code
from azureml.core import Experiment
from azureml.core import Model
import pandas as pd
import numpy as np
import joblib
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
# Create an Azure ML experiment in your workspace
experiment = Experiment(workspace=ws, name="mslearn-train-diabetes")
run = experiment.start_logging()
print("Starting experiment:", experiment.name)
# load the diabetes dataset
print("Loading Data...")
diabetes = pd.read_csv('data/diabetes.csv')
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a decision tree model
print('Training a decision tree model')
model = DecisionTreeClassifier().fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# Save the trained model
model_file = 'diabetes_model.pkl'
joblib.dump(value=model, filename=model_file)
run.upload_file(name = 'outputs/' + model_file, path_or_stream = './' + model_file)
# Complete the run
run.complete()
# Register the model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
tags={'Training context':'Inline Training'},
properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
print('Model trained and registered.')
###Output
Starting experiment: mslearn-train-diabetes
Loading Data...
Training a decision tree model
Accuracy: 0.8893333333333333
AUC: 0.8766008259117368
Model trained and registered.
###Markdown
Deploy the model as a web service You have trained and registered a machine learning model that classifies patients based on the likelihood of them having diabetes. This model could be used in a production environment such as a doctor's surgery, where only patients deemed to be at risk need to be subjected to a clinical test for diabetes. To support this scenario, you will deploy the model as a web service. First, let's determine what models you have registered in the workspace.
###Code
from azureml.core import Model
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
###Output
diabetes_model version: 7
Training context : Inline Training
AUC : 0.8766008259117368
Accuracy : 0.8893333333333333
diabetes_model version: 6
Training context : Pipeline
AUC : 0.88260340417324
Accuracy : 0.898
diabetes_model version: 5
Training context : Parameterized script
AUC : 0.8484357430717946
Accuracy : 0.774
diabetes_model version: 4
Training context : Script
AUC : 0.8483203144435048
Accuracy : 0.774
diabetes_model version: 3
Training context : Compute cluster
AUC : 0.8859198496550613
Accuracy : 0.9008888888888889
diabetes_model version: 2
Training context : File dataset
AUC : 0.8568743524381947
Accuracy : 0.7891111111111111
diabetes_model version: 1
Training context : Tabular dataset
AUC : 0.8568509052814499
Accuracy : 0.7891111111111111
amlstudio-designer-predict-dia version: 1
CreatedByAMLStudio : true
AutoMLd7268af350 version: 1
###Markdown
Right, now let's get the model that we want to deploy. By default, if we specify a model name, the latest version will be returned.
###Code
model = ws.models['diabetes_model']
print(model.name, 'version', model.version)
###Output
diabetes_model version 7
###Markdown
We're going to create a web service to host this model, and this will require some code and configuration files; so let's create a folder for those.
###Code
import os
folder_name = 'diabetes_service'
# Create a folder for the web service files
experiment_folder = './' + folder_name
os.makedirs(experiment_folder, exist_ok=True)
print(folder_name, 'folder created.')
# Set path for scoring script
script_file = os.path.join(experiment_folder,"score_diabetes.py")
###Output
diabetes_service folder created.
###Markdown
The web service where we deploy the model will need some Python code to load the input data, get the model from the workspace, and generate and return predictions. We'll save this code in an *entry script* (often called a *scoring script*) that will be deployed to the web service:
###Code
%%writefile $script_file
import json
import joblib
import numpy as np
from azureml.core.model import Model
# Called when the service is loaded
def init():
global model
# Get the path to the deployed model file and load it
model_path = Model.get_model_path('diabetes_model')
model = joblib.load(model_path)
# Called when a request is received
def run(raw_data):
# Get the input data as a numpy array
data = np.array(json.loads(raw_data)['data'])
# Get a prediction from the model
predictions = model.predict(data)
# Get the corresponding classname for each prediction (0 or 1)
classnames = ['not-diabetic', 'diabetic']
predicted_classes = []
for prediction in predictions:
predicted_classes.append(classnames[prediction])
# Return the predictions as JSON
return json.dumps(predicted_classes)
###Output
Writing ./diabetes_service/score_diabetes.py
###Markdown
The web service will be hosted in a container, and the container will need to install any required Python dependencies when it gets initialized. In this case, our scoring code requires **scikit-learn**, so we'll create a .yml file that tells the container host to install this into the environment.
###Code
from azureml.core.conda_dependencies import CondaDependencies
# Add the dependencies for our model (AzureML defaults is already included)
myenv = CondaDependencies()
myenv.add_conda_package('scikit-learn')
# Save the environment config as a .yml file
env_file = os.path.join(experiment_folder,"diabetes_env.yml")
with open(env_file,"w") as f:
f.write(myenv.serialize_to_string())
print("Saved dependency info in", env_file)
# Print the .yml file
with open(env_file,"r") as f:
print(f.read())
###Output
Saved dependency info in ./diabetes_service/diabetes_env.yml
# Conda environment specification. The dependencies defined in this file will
# be automatically provisioned for runs with userManagedDependencies=False.
# Details about the Conda environment file format:
# https://conda.io/docs/user-guide/tasks/manage-environments.html#create-env-file-manually
name: project_environment
dependencies:
# The python interpreter version.
# Currently Azure ML only supports 3.5.2 and later.
- python=3.6.2
- pip:
# Required packages for AzureML execution, history, and data preparation.
- azureml-defaults
- scikit-learn
channels:
- anaconda
- conda-forge
###Markdown
Now you're ready to deploy. We'll deploy the container as a service named **diabetes-service**. The deployment process includes the following steps:
1. Define an inference configuration, which includes the scoring and environment files required to load and use the model.
2. Define a deployment configuration that defines the execution environment in which the service will be hosted. In this case, an Azure Container Instance.
3. Deploy the model as a web service.
4. Verify the status of the deployed service.
> **More Information**: For more details about model deployment, and options for target execution environments, see the [documentation](https://docs.microsoft.com/azure/machine-learning/how-to-deploy-and-where).
Deployment will take some time as it first runs a process to create a container image, and then runs a process to create a web service based on the image. When deployment has completed successfully, you'll see a status of **Healthy**.
###Code
from azureml.core.webservice import AciWebservice
from azureml.core.model import InferenceConfig
# Configure the scoring environment
inference_config = InferenceConfig(runtime= "python",
entry_script=script_file,
conda_file=env_file)
deployment_config = AciWebservice.deploy_configuration(cpu_cores = 1, memory_gb = 1)
service_name = "diabetes-service"
service = Model.deploy(ws, service_name, [model], inference_config, deployment_config)
service.wait_for_deployment(True)
print(service.state)
###Output
Tips: You can try get_logs(): https://aka.ms/debugimage#dockerlog or local deployment: https://aka.ms/debugimage#debug-locally to debug if deployment takes longer than 10 minutes.
Running.............................................................................................................................................
Succeeded
ACI service creation operation finished, operation "Succeeded"
Healthy
###Markdown
Hopefully, the deployment has been successful and you can see a status of **Healthy**. If not, you can use the following code to get the service logs to help you troubleshoot.
###Code
print(service.get_logs())
# If you need to make a change and redeploy, you may need to delete unhealthy service using the following code:
#service.delete()
###Output
2021-03-31T05:02:31,701330900+00:00 - gunicorn/run
2021-03-31T05:02:31,706130700+00:00 - iot-server/run
2021-03-31T05:02:31,715855000+00:00 - rsyslog/run
2021-03-31T05:02:31,787972600+00:00 - nginx/run
/usr/sbin/nginx: /azureml-envs/azureml_4b824bcb98517d791c41923f24d65461/lib/libcrypto.so.1.0.0: no version information available (required by /usr/sbin/nginx)
/usr/sbin/nginx: /azureml-envs/azureml_4b824bcb98517d791c41923f24d65461/lib/libcrypto.so.1.0.0: no version information available (required by /usr/sbin/nginx)
/usr/sbin/nginx: /azureml-envs/azureml_4b824bcb98517d791c41923f24d65461/lib/libssl.so.1.0.0: no version information available (required by /usr/sbin/nginx)
/usr/sbin/nginx: /azureml-envs/azureml_4b824bcb98517d791c41923f24d65461/lib/libssl.so.1.0.0: no version information available (required by /usr/sbin/nginx)
/usr/sbin/nginx: /azureml-envs/azureml_4b824bcb98517d791c41923f24d65461/lib/libssl.so.1.0.0: no version information available (required by /usr/sbin/nginx)
EdgeHubConnectionString and IOTEDGE_IOTHUBHOSTNAME are not set. Exiting...
2021-03-31T05:02:32,305675700+00:00 - iot-server/finish 1 0
2021-03-31T05:02:32,311150300+00:00 - Exit code 1 is normal. Not restarting iot-server.
Starting gunicorn 19.9.0
Listening at: http://127.0.0.1:31311 (68)
Using worker: sync
worker timeout is set to 300
Booting worker with pid: 99
SPARK_HOME not set. Skipping PySpark Initialization.
Initializing logger
2021-03-31 05:02:34,756 | root | INFO | Starting up app insights client
2021-03-31 05:02:34,756 | root | INFO | Starting up request id generator
2021-03-31 05:02:34,756 | root | INFO | Starting up app insight hooks
2021-03-31 05:02:34,757 | root | INFO | Invoking user's init function
2021-03-31 05:02:35,544 | root | INFO | Users's init has completed successfully
/azureml-envs/azureml_4b824bcb98517d791c41923f24d65461/lib/python3.6/site-packages/sklearn/base.py:334: UserWarning: Trying to unpickle estimator DecisionTreeClassifier from version 0.22.2.post1 when using version 0.23.2. This might lead to breaking code or invalid results. Use at your own risk.
UserWarning)
2021-03-31 05:02:35,549 | root | INFO | Skipping middleware: dbg_model_info as it's not enabled.
2021-03-31 05:02:35,549 | root | INFO | Skipping middleware: dbg_resource_usage as it's not enabled.
2021-03-31 05:02:35,550 | root | INFO | Scoring timeout is found from os.environ: 60000 ms
2021-03-31 05:02:45,065 | root | INFO | Swagger file not present
2021-03-31 05:02:45,065 | root | INFO | 404
127.0.0.1 - - [31/Mar/2021:05:02:45 +0000] "GET /swagger.json HTTP/1.0" 404 19 "-" "Go-http-client/1.1"
2021-03-31 05:02:52,127 | root | INFO | Swagger file not present
2021-03-31 05:02:52,127 | root | INFO | 404
127.0.0.1 - - [31/Mar/2021:05:02:52 +0000] "GET /swagger.json HTTP/1.0" 404 19 "-" "Go-http-client/1.1"
2021-03-31 05:02:54,084 | root | INFO | Swagger file not present
2021-03-31 05:02:54,085 | root | INFO | 404
127.0.0.1 - - [31/Mar/2021:05:02:54 +0000] "GET /swagger.json HTTP/1.0" 404 19 "-" "Go-http-client/1.1"
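###Markdown
The tip printed during deployment also mentions local deployment as a debugging option. As a minimal sketch (not part of the original lab, and assuming Docker is available where this notebook runs), the same `model` and `inference_config` could be deployed to a local container for faster debug iterations:
###Code
# Illustrative sketch only: local Docker-based deployment for debugging.
# Assumes Docker is installed locally and reuses 'model' and 'inference_config' from above.
from azureml.core.webservice import LocalWebservice
local_config = LocalWebservice.deploy_configuration(port=8890)
local_service = Model.deploy(ws, 'diabetes-service-local', [model], inference_config, local_config)
local_service.wait_for_deployment(True)
print(local_service.state)
# Clean up the local container when finished:
# local_service.delete()
###Output
_____no_output_____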
###Markdown
Take a look at your workspace in [Azure Machine Learning Studio](https://ml.azure.com) and view the **Endpoints** page, which shows the deployed services in your workspace.

You can also retrieve the names of web services in your workspace by running the following code:
###Code
for webservice_name in ws.webservices:
print(webservice_name)
###Output
diabetes-service
###Markdown
Use the web service

With the service deployed, now you can consume it from a client application.
###Code
import json
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22]]
print ('Patient: {}'.format(x_new[0]))
# Convert the array to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Call the web service, passing the input data (the web service will also accept the data in binary format)
predictions = service.run(input_data = input_json)
# Get the predicted class - it'll be the first (and only) one.
predicted_classes = json.loads(predictions)
print(predicted_classes[0])
###Output
Patient: [2, 180, 74, 24, 21, 23.9091702, 1.488172308, 22]
diabetic
###Markdown
You can also send multiple patient observations to the service, and get back a prediction for each one.
###Code
import json
# This time our input is an array of two feature arrays
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22],
[0,148,58,11,179,39.19207553,0.160829008,45]]
# Convert the array or arrays to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Call the web service, passing the input data
predictions = service.run(input_data = input_json)
# Get the predicted classes.
predicted_classes = json.loads(predictions)
for i in range(len(x_new)):
print ("Patient {}".format(x_new[i]), predicted_classes[i] )
###Output
Patient [2, 180, 74, 24, 21, 23.9091702, 1.488172308, 22] diabetic
Patient [0, 148, 58, 11, 179, 39.19207553, 0.160829008, 45] not-diabetic
###Markdown
The code above uses the Azure Machine Learning SDK to connect to the containerized web service and use it to generate predictions from your diabetes classification model. In production, a model is likely to be consumed by business applications that do not use the Azure Machine Learning SDK, but simply make HTTP requests to the web service.

Let's determine the URL to which these applications must submit their requests:
###Code
endpoint = service.scoring_uri
print(endpoint)
###Output
http://ae85fba9-29b1-4fdb-b3e9-93e99ffbeeb2.eastus.azurecontainer.io/score
###Markdown
Now that you know the endpoint URI, an application can simply make an HTTP request, sending the patient data in JSON format, and receive back the predicted class(es).
###Code
import requests
import json
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22],
[0,148,58,11,179,39.19207553,0.160829008,45]]
# Convert the array to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Set the content type
headers = { 'Content-Type':'application/json' }
predictions = requests.post(endpoint, input_json, headers = headers)
predicted_classes = json.loads(predictions.json())
for i in range(len(x_new)):
print ("Patient {}".format(x_new[i]), predicted_classes[i] )
###Output
Patient [2, 180, 74, 24, 21, 23.9091702, 1.488172308, 22] diabetic
Patient [0, 148, 58, 11, 179, 39.19207553, 0.160829008, 45] not-diabetic
###Markdown
You've deployed your web service as an Azure Container Instance (ACI) service that requires no authentication. This is fine for development and testing, but for production you should consider deploying to an Azure Kubernetes Service (AKS) cluster and enabling token-based authentication. This would require REST requests to include an **Authorization** header, as in the sketch below (authentication is not enabled in this deployment, so treat it as a pattern only).
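###Code
# Illustrative sketch only: calling an endpoint with authentication enabled.
# Authentication is NOT enabled in this lab; enabling key-based auth on ACI would use
# AciWebservice.deploy_configuration(auth_enabled=True), and AKS deployments also
# support token-based auth via service.get_token().
import requests
import json

# Reuses 'service', 'endpoint', and 'input_json' from the cells above.
primary_key, secondary_key = service.get_keys()  # retrieve auth keys (auth must be enabled)
headers = { 'Content-Type':'application/json',
            'Authorization':'Bearer ' + primary_key }
predictions = requests.post(endpoint, input_json, headers = headers)
print(predictions.json())
###Output
_____no_output_____
###Markdown
Delete the service

When you no longer need your service, you should delete it to avoid incurring unnecessary charges.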
###Code
service.delete()
print ('Service deleted.')
###Output
Service deleted.
###Markdown
Create a real-time inferencing service

After training a predictive model, you can deploy it as a real-time service that clients can use to get predictions from new data.

Connect to your workspace

To get started, connect to your workspace.

> **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.
###Code
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
###Output
_____no_output_____
###Markdown
Train and register a model

Now let's train and register a model.
###Code
from azureml.core import Experiment
from azureml.core import Model
import pandas as pd
import numpy as np
import joblib
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
# Create an Azure ML experiment in your workspace
experiment = Experiment(workspace=ws, name="mslearn-train-diabetes")
run = experiment.start_logging()
print("Starting experiment:", experiment.name)
# load the diabetes dataset
print("Loading Data...")
diabetes = pd.read_csv('data/diabetes.csv')
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a decision tree model
print('Training a decision tree model')
model = DecisionTreeClassifier().fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# Save the trained model
model_file = 'diabetes_model.pkl'
joblib.dump(value=model, filename=model_file)
run.upload_file(name = 'outputs/' + model_file, path_or_stream = './' + model_file)
# Complete the run
run.complete()
# Register the model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
tags={'Training context':'Inline Training'},
properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
print('Model trained and registered.')
###Output
_____no_output_____
###Markdown
Deploy the model as a web service

You have trained and registered a machine learning model that classifies patients based on the likelihood of them having diabetes. This model could be used in a production environment such as a doctor's surgery where only patients deemed to be at risk need to be subjected to a clinical test for diabetes. To support this scenario, you will deploy the model as a web service.

First, let's determine what models you have registered in the workspace.
###Code
from azureml.core import Model
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
###Output
_____no_output_____
###Markdown
Right, now let's get the model that we want to deploy. By default, if we specify a model name, the latest version will be returned.
###Code
model = ws.models['diabetes_model']
print(model.name, 'version', model.version)
###Output
_____no_output_____
###Markdown
We're going to create a web service to host this model, and this will require some code and configuration files; so let's create a folder for those.
###Code
import os
folder_name = 'diabetes_service'
# Create a folder for the web service files
experiment_folder = './' + folder_name
os.makedirs(experiment_folder, exist_ok=True)
print(folder_name, 'folder created.')
# Set path for scoring script
script_file = os.path.join(experiment_folder,"score_diabetes.py")
###Output
_____no_output_____
###Markdown
The web service where we deploy the model will need some Python code to load the input data, get the model from the workspace, and generate and return predictions. We'll save this code in an *entry script* (often called a *scoring script*) that will be deployed to the web service:
###Code
%%writefile $script_file
import json
import joblib
import numpy as np
from azureml.core.model import Model
# Called when the service is loaded
def init():
global model
# Get the path to the deployed model file and load it
model_path = Model.get_model_path('diabetes_model')
model = joblib.load(model_path)
# Called when a request is received
def run(raw_data):
# Get the input data as a numpy array
data = np.array(json.loads(raw_data)['data'])
# Get a prediction from the model
predictions = model.predict(data)
# Get the corresponding classname for each prediction (0 or 1)
classnames = ['not-diabetic', 'diabetic']
predicted_classes = []
for prediction in predictions:
predicted_classes.append(classnames[prediction])
# Return the predictions as JSON
return json.dumps(predicted_classes)
###Output
_____no_output_____
###Markdown
The web service will be hosted in a container, and the container will need to install any required Python dependencies when it gets initialized. In this case, our scoring code requires **scikit-learn**, so we'll create a .yml file that tells the container host to install this into the environment.
###Code
from azureml.core.conda_dependencies import CondaDependencies
# Add the dependencies for our model (AzureML defaults is already included)
myenv = CondaDependencies()
myenv.add_conda_package('scikit-learn')
# Save the environment config as a .yml file
env_file = os.path.join(experiment_folder,"diabetes_env.yml")
with open(env_file,"w") as f:
f.write(myenv.serialize_to_string())
print("Saved dependency info in", env_file)
# Print the .yml file
with open(env_file,"r") as f:
print(f.read())
###Output
_____no_output_____
###Markdown
Now you're ready to deploy. We'll deploy the container as a service named **diabetes-service**. The deployment process includes the following steps:

1. Define an inference configuration, which includes the scoring and environment files required to load and use the model.
2. Define a deployment configuration that defines the execution environment in which the service will be hosted. In this case, an Azure Container Instance.
3. Deploy the model as a web service.
4. Verify the status of the deployed service.

> **More Information**: For more details about model deployment, and options for target execution environments, see the [documentation](https://docs.microsoft.com/azure/machine-learning/how-to-deploy-and-where).

Deployment will take some time as it first runs a process to create a container image, and then runs a process to create a web service based on the image. When deployment has completed successfully, you'll see a status of **Healthy**.
###Code
from azureml.core.webservice import AciWebservice
from azureml.core.model import InferenceConfig
# Configure the scoring environment
inference_config = InferenceConfig(runtime= "python",
entry_script=script_file,
conda_file=env_file)
deployment_config = AciWebservice.deploy_configuration(cpu_cores = 1, memory_gb = 1)
service_name = "diabetes-service"
service = Model.deploy(ws, service_name, [model], inference_config, deployment_config)
service.wait_for_deployment(True)
print(service.state)
###Output
_____no_output_____
###Markdown
Hopefully, the deployment has been successful and you can see a status of **Healthy**. If not, you can use the following code to get the service logs to help you troubleshoot.
###Code
print(service.get_logs())
# If you need to make a change and redeploy, you may need to delete unhealthy service using the following code:
#service.delete()
###Output
_____no_output_____
###Markdown
Take a look at your workspace in [Azure Machine Learning Studio](https://ml.azure.com) and view the **Endpoints** page, which shows the deployed services in your workspace.

You can also retrieve the names of web services in your workspace by running the following code:
###Code
for webservice_name in ws.webservices:
print(webservice_name)
###Output
_____no_output_____
###Markdown
Use the web service

With the service deployed, now you can consume it from a client application.
###Code
import json
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22]]
print ('Patient: {}'.format(x_new[0]))
# Convert the array to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Call the web service, passing the input data (the web service will also accept the data in binary format)
predictions = service.run(input_data = input_json)
# Get the predicted class - it'll be the first (and only) one.
predicted_classes = json.loads(predictions)
print(predicted_classes[0])
###Output
_____no_output_____
###Markdown
You can also send multiple patient observations to the service, and get back a prediction for each one.
###Code
import json
# This time our input is an array of two feature arrays
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22],
[0,148,58,11,179,39.19207553,0.160829008,45]]
# Convert the array or arrays to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Call the web service, passing the input data
predictions = service.run(input_data = input_json)
# Get the predicted classes.
predicted_classes = json.loads(predictions)
for i in range(len(x_new)):
print ("Patient {}".format(x_new[i]), predicted_classes[i] )
###Output
_____no_output_____
###Markdown
The code above uses the Azure Machine Learning SDK to connect to the containerized web service and use it to generate predictions from your diabetes classification model. In production, a model is likely to be consumed by business applications that do not use the Azure Machine Learning SDK, but simply make HTTP requests to the web service.

Let's determine the URL to which these applications must submit their requests:
###Code
endpoint = service.scoring_uri
print(endpoint)
###Output
_____no_output_____
###Markdown
Now that you know the endpoint URI, an application can simply make an HTTP request, sending the patient data in JSON format, and receive back the predicted class(es).
###Code
import requests
import json
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22],
[0,148,58,11,179,39.19207553,0.160829008,45]]
# Convert the array to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Set the content type
headers = { 'Content-Type':'application/json' }
predictions = requests.post(endpoint, input_json, headers = headers)
predicted_classes = json.loads(predictions.json())
for i in range(len(x_new)):
print ("Patient {}".format(x_new[i]), predicted_classes[i] )
###Output
_____no_output_____
###Markdown
You've deployed your web service as an Azure Container Instance (ACI) service that requires no authentication. This is fine for development and testing, but for production you should consider deploying to an Azure Kubernetes Service (AKS) cluster and enabling token-based authentication. This would require REST requests to include an **Authorization** header.

Delete the service

When you no longer need your service, you should delete it to avoid incurring unnecessary charges.
###Code
service.delete()
print ('Service deleted.')
###Output
_____no_output_____
###Markdown
Create a real-time inferencing serviceAfter training a predictive model, you can deploy it as a real-time service that clients can use to get predictions from new data. Connect to your workspaceTo get started, connect to your workspace.> **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.
###Code
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
###Output
Ready to use Azure ML 1.22.0 to work with dp100_ml
###Markdown
Train and register a model

Now let's train and register a model.
###Code
from azureml.core import Experiment
from azureml.core import Model
import pandas as pd
import numpy as np
import joblib
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
# Create an Azure ML experiment in your workspace
experiment = Experiment(workspace=ws, name="mslearn-train-diabetes")
run = experiment.start_logging()
print("Starting experiment:", experiment.name)
# load the diabetes dataset
print("Loading Data...")
diabetes = pd.read_csv('data/diabetes.csv')
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a decision tree model
print('Training a decision tree model')
model = DecisionTreeClassifier().fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# Save the trained model
model_file = 'diabetes_model.pkl'
joblib.dump(value=model, filename=model_file)
run.upload_file(name = 'outputs/' + model_file, path_or_stream = './' + model_file)
# Complete the run
run.complete()
# Register the model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
tags={'Training context':'Inline Training'},
properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
print('Model trained and registered.')
###Output
_____no_output_____
###Markdown
Deploy the model as a web service

You have trained and registered a machine learning model that classifies patients based on the likelihood of them having diabetes. This model could be used in a production environment such as a doctor's surgery where only patients deemed to be at risk need to be subjected to a clinical test for diabetes. To support this scenario, you will deploy the model as a web service.

First, let's determine what models you have registered in the workspace.
###Code
from azureml.core import Model
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
###Output
diabetes_model version: 8
Training context : Inline Training
AUC : 0.8823125528633264
Accuracy : 0.894
diabetes_model version: 7
Training context : Pipeline
AUC : 0.8863896775883228
Accuracy : 0.9
diabetes_model version: 6
Training context : Compute cluster
AUC : 0.8852500572906943
Accuracy : 0.9
diabetes_model version: 5
Training context : Compute cluster
AUC : 0.8852500572906943
Accuracy : 0.9
diabetes_model version: 4
Training context : File dataset
AUC : 0.8568743524381947
Accuracy : 0.7891111111111111
diabetes_model version: 3
Training context : Tabular dataset
AUC : 0.8568509052814499
Accuracy : 0.7891111111111111
diabetes_model version: 2
Training context : Parameterized script
AUC : 0.8484357430717946
Accuracy : 0.774
diabetes_model version: 1
Training context : Script
AUC : 0.8483203144435048
Accuracy : 0.774
amlstudio-designer-predict-dia version: 2
CreatedByAMLStudio : true
amlstudio-designer-predict-dia version: 1
CreatedByAMLStudio : true
AutoMLafb0d63c21 version: 1
###Markdown
Right, now let's get the model that we want to deploy. By default, if we specify a model name, the latest version will be returned.
###Code
model = ws.models['diabetes_model']
print(model.name, 'version', model.version)
###Output
diabetes_model version 8
###Markdown
We're going to create a web service to host this model, and this will require some code and configuration files; so let's create a folder for those.
###Code
import os
folder_name = 'diabetes_service'
# Create a folder for the web service files
experiment_folder = './' + folder_name
os.makedirs(experiment_folder, exist_ok=True)
print(folder_name, 'folder created.')
# Set path for scoring script
script_file = os.path.join(experiment_folder,"score_diabetes.py")
###Output
diabetes_service folder created.
###Markdown
The web service where we deploy the model will need some Python code to load the input data, get the model from the workspace, and generate and return predictions. We'll save this code in an *entry script* (often called a *scoring script*) that will be deployed to the web service:
###Code
%%writefile $script_file
import json
import joblib
import numpy as np
from azureml.core.model import Model
# Called when the service is loaded
def init():
global model
# Get the path to the deployed model file and load it
model_path = Model.get_model_path('diabetes_model')
model = joblib.load(model_path)
# Called when a request is received
def run(raw_data):
# Get the input data as a numpy array
data = np.array(json.loads(raw_data)['data'])
# Get a prediction from the model
predictions = model.predict(data)
# Get the corresponding classname for each prediction (0 or 1)
classnames = ['not-diabetic', 'diabetic']
predicted_classes = []
for prediction in predictions:
predicted_classes.append(classnames[prediction])
# Return the predictions as JSON
return json.dumps(predicted_classes)
###Output
Writing ./diabetes_service/score_diabetes.py
###Markdown
The web service will be hosted in a container, and the container will need to install any required Python dependencies when it gets initialized. In this case, our scoring code requires **scikit-learn**, so we'll create a .yml file that tells the container host to install this into the environment.
###Code
from azureml.core.conda_dependencies import CondaDependencies
# Add the dependencies for our model (AzureML defaults is already included)
myenv = CondaDependencies()
myenv.add_conda_package('scikit-learn')
# Save the environment config as a .yml file
env_file = os.path.join(experiment_folder,"diabetes_env.yml")
with open(env_file,"w") as f:
f.write(myenv.serialize_to_string())
print("Saved dependency info in", env_file)
# Print the .yml file
with open(env_file,"r") as f:
print(f.read())
###Output
Saved dependency info in ./diabetes_service/diabetes_env.yml
# Conda environment specification. The dependencies defined in this file will
# be automatically provisioned for runs with userManagedDependencies=False.
# Details about the Conda environment file format:
# https://conda.io/docs/user-guide/tasks/manage-environments.html#create-env-file-manually
name: project_environment
dependencies:
# The python interpreter version.
# Currently Azure ML only supports 3.5.2 and later.
- python=3.6.2
- pip:
# Required packages for AzureML execution, history, and data preparation.
- azureml-defaults
- scikit-learn
channels:
- anaconda
- conda-forge
###Markdown
Now you're ready to deploy. We'll deploy the container as a service named **diabetes-service**. The deployment process includes the following steps:

1. Define an inference configuration, which includes the scoring and environment files required to load and use the model.
2. Define a deployment configuration that defines the execution environment in which the service will be hosted. In this case, an Azure Container Instance.
3. Deploy the model as a web service.
4. Verify the status of the deployed service.

> **More Information**: For more details about model deployment, and options for target execution environments, see the [documentation](https://docs.microsoft.com/azure/machine-learning/how-to-deploy-and-where).

Deployment will take some time as it first runs a process to create a container image, and then runs a process to create a web service based on the image. When deployment has completed successfully, you'll see a status of **Healthy**.
###Code
from azureml.core.webservice import AciWebservice
from azureml.core.model import InferenceConfig
# Configure the scoring environment
inference_config = InferenceConfig(runtime= "python",
entry_script=script_file,
conda_file=env_file)
deployment_config = AciWebservice.deploy_configuration(cpu_cores = 1, memory_gb = 1)
service_name = "diabetes-service"
service = Model.deploy(ws, service_name, [model], inference_config, deployment_config)
service.wait_for_deployment(True)
print(service.state)
###Output
Tips: You can try get_logs(): https://aka.ms/debugimage#dockerlog or local deployment: https://aka.ms/debugimage#debug-locally to debug if deployment takes longer than 10 minutes.
Running............................................................................................................
Succeeded
ACI service creation operation finished, operation "Succeeded"
Healthy
###Markdown
Hopefully, the deployment has been successful and you can see a status of **Healthy**. If not, you can use the following code to get the service logs to help you troubleshoot.
###Code
print(service.get_logs())
# If you need to make a change and redeploy, you may need to delete unhealthy service using the following code:
#service.delete()
###Output
2021-03-16T20:25:12,689596100+00:00 - gunicorn/run
2021-03-16T20:25:12,701926200+00:00 - rsyslog/run
2021-03-16T20:25:12,711181600+00:00 - iot-server/run
2021-03-16T20:25:12,729356600+00:00 - nginx/run
/usr/sbin/nginx: /azureml-envs/azureml_4b824bcb98517d791c41923f24d65461/lib/libcrypto.so.1.0.0: no version information available (required by /usr/sbin/nginx)
/usr/sbin/nginx: /azureml-envs/azureml_4b824bcb98517d791c41923f24d65461/lib/libcrypto.so.1.0.0: no version information available (required by /usr/sbin/nginx)
/usr/sbin/nginx: /azureml-envs/azureml_4b824bcb98517d791c41923f24d65461/lib/libssl.so.1.0.0: no version information available (required by /usr/sbin/nginx)
/usr/sbin/nginx: /azureml-envs/azureml_4b824bcb98517d791c41923f24d65461/lib/libssl.so.1.0.0: no version information available (required by /usr/sbin/nginx)
/usr/sbin/nginx: /azureml-envs/azureml_4b824bcb98517d791c41923f24d65461/lib/libssl.so.1.0.0: no version information available (required by /usr/sbin/nginx)
EdgeHubConnectionString and IOTEDGE_IOTHUBHOSTNAME are not set. Exiting...
2021-03-16T20:25:14,554849800+00:00 - iot-server/finish 1 0
2021-03-16T20:25:14,560734500+00:00 - Exit code 1 is normal. Not restarting iot-server.
Starting gunicorn 19.9.0
Listening at: http://127.0.0.1:31311 (69)
Using worker: sync
worker timeout is set to 300
Booting worker with pid: 97
SPARK_HOME not set. Skipping PySpark Initialization.
Initializing logger
2021-03-16 20:25:20,124 | root | INFO | Starting up app insights client
2021-03-16 20:25:20,125 | root | INFO | Starting up request id generator
2021-03-16 20:25:20,125 | root | INFO | Starting up app insight hooks
2021-03-16 20:25:20,126 | root | INFO | Invoking user's init function
2021-03-16 20:25:22,976 | root | INFO | Users's init has completed successfully
/azureml-envs/azureml_4b824bcb98517d791c41923f24d65461/lib/python3.6/site-packages/sklearn/base.py:334: UserWarning: Trying to unpickle estimator DecisionTreeClassifier from version 0.22.2.post1 when using version 0.23.2. This might lead to breaking code or invalid results. Use at your own risk.
UserWarning)
2021-03-16 20:25:22,988 | root | INFO | Skipping middleware: dbg_model_info as it's not enabled.
2021-03-16 20:25:22,988 | root | INFO | Skipping middleware: dbg_resource_usage as it's not enabled.
2021-03-16 20:25:22,997 | root | INFO | Scoring timeout is found from os.environ: 60000 ms
2021-03-16 20:25:24,018 | root | INFO | Swagger file not present
2021-03-16 20:25:24,020 | root | INFO | 404
127.0.0.1 - - [16/Mar/2021:20:25:24 +0000] "GET /swagger.json HTTP/1.0" 404 19 "-" "Go-http-client/1.1"
2021-03-16 20:25:29,424 | root | INFO | Swagger file not present
2021-03-16 20:25:29,424 | root | INFO | 404
127.0.0.1 - - [16/Mar/2021:20:25:29 +0000] "GET /swagger.json HTTP/1.0" 404 19 "-" "Go-http-client/1.1"
###Markdown
Take a look at your workspace in [Azure Machine Learning Studio](https://ml.azure.com) and view the **Endpoints** page, which shows the deployed services in your workspace.

You can also retrieve the names of web services in your workspace by running the following code:
###Code
for webservice_name in ws.webservices:
print(webservice_name)
###Output
diabetes-service
###Markdown
Use the web service

With the service deployed, now you can consume it from a client application.
###Code
import json
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22]]
print ('Patient: {}'.format(x_new[0]))
# Convert the array to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Call the web service, passing the input data (the web service will also accept the data in binary format)
predictions = service.run(input_data = input_json)
# Get the predicted class - it'll be the first (and only) one.
predicted_classes = json.loads(predictions)
print(predicted_classes[0])
###Output
Patient: [2, 180, 74, 24, 21, 23.9091702, 1.488172308, 22]
diabetic
###Markdown
You can also send multiple patient observations to the service, and get back a prediction for each one.
###Code
import json
# This time our input is an array of two feature arrays
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22],
[0,148,58,11,179,39.19207553,0.160829008,45]]
# Convert the array or arrays to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Call the web service, passing the input data
predictions = service.run(input_data = input_json)
# Get the predicted classes.
predicted_classes = json.loads(predictions)
for i in range(len(x_new)):
print ("Patient {}".format(x_new[i]), predicted_classes[i] )
###Output
Patient [2, 180, 74, 24, 21, 23.9091702, 1.488172308, 22] diabetic
Patient [0, 148, 58, 11, 179, 39.19207553, 0.160829008, 45] not-diabetic
###Markdown
The code above uses the Azure Machine Learning SDK to connect to the containerized web service and use it to generate predictions from your diabetes classification model. In production, a model is likely to be consumed by business applications that do not use the Azure Machine Learning SDK, but simply make HTTP requests to the web service.

Let's determine the URL to which these applications must submit their requests:
###Code
endpoint = service.scoring_uri
print(endpoint)
###Output
http://31dcfe09-d1ea-4995-9f44-6ac401be7274.northcentralus.azurecontainer.io/score
###Markdown
Now that you know the endpoint URI, an application can simply make an HTTP request, sending the patient data in JSON format, and receive back the predicted class(es).
###Code
import requests
import json
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22],
[0,148,58,11,179,39.19207553,0.160829008,45]]
# Convert the array to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Set the content type
headers = { 'Content-Type':'application/json' }
predictions = requests.post(endpoint, input_json, headers = headers)
predicted_classes = json.loads(predictions.json())
for i in range(len(x_new)):
print ("Patient {}".format(x_new[i]), predicted_classes[i] )
###Output
Patient [2, 180, 74, 24, 21, 23.9091702, 1.488172308, 22] diabetic
Patient [0, 148, 58, 11, 179, 39.19207553, 0.160829008, 45] not-diabetic
###Markdown
You've deployed your web service as an Azure Container Instance (ACI) service that requires no authentication. This is fine for development and testing, but for production you should consider deploying to an Azure Kubernetes Service (AKS) cluster and enabling token-based authentication. This would require REST requests to include an **Authorization** header.

Delete the service

When you no longer need your service, you should delete it to avoid incurring unnecessary charges.
###Code
service.delete()
print ('Service deleted.')
###Output
Service deleted.
###Markdown
Create a real-time inferencing service

After training a predictive model, you can deploy it as a real-time service that clients can use to get predictions from new data.

Connect to your workspace

To get started, connect to your workspace.

> **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.
###Code
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
###Output
_____no_output_____
###Markdown
Train and register a model

Now let's train and register a model.
###Code
from azureml.core import Experiment
from azureml.core import Model
import pandas as pd
import numpy as np
import joblib
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
# Create an Azure ML experiment in your workspace
experiment = Experiment(workspace=ws, name="mslearn-train-diabetes")
run = experiment.start_logging()
print("Starting experiment:", experiment.name)
# load the diabetes dataset
print("Loading Data...")
diabetes = pd.read_csv('data/diabetes.csv')
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a decision tree model
print('Training a decision tree model')
model = DecisionTreeClassifier().fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# Save the trained model
model_file = 'diabetes_model.pkl'
joblib.dump(value=model, filename=model_file)
run.upload_file(name = 'outputs/' + model_file, path_or_stream = './' + model_file)
# Complete the run
run.complete()
# Register the model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
tags={'Training context':'Inline Training'},
properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
print('Model trained and registered.')
###Output
_____no_output_____
###Markdown
Deploy the model as a web service

You have trained and registered a machine learning model that classifies patients based on the likelihood of them having diabetes. This model could be used in a production environment such as a doctor's surgery where only patients deemed to be at risk need to be subjected to a clinical test for diabetes. To support this scenario, you will deploy the model as a web service.

First, let's determine what models you have registered in the workspace.
###Code
from azureml.core import Model
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
###Output
_____no_output_____
###Markdown
Right, now let's get the model that we want to deploy. By default, if we specify a model name, the latest version will be returned.
###Code
model = ws.models['diabetes_model']
print(model.name, 'version', model.version)
###Output
_____no_output_____
###Markdown
We're going to create a web service to host this model, and this will require some code and configuration files; so let's create a folder for those.
###Code
import os
# Create a folder for the deployment files
deployment_folder = './diabetes_service'
os.makedirs(deployment_folder, exist_ok=True)
print(deployment_folder, 'folder created.')
# Set path for scoring script
script_file = 'score_diabetes.py'
script_path = os.path.join(deployment_folder,script_file)
###Output
_____no_output_____
###Markdown
The web service where we deploy the model will need some Python code to load the input data, get the model from the workspace, and generate and return predictions. We'll save this code in an *entry script* (often called a *scoring script*) that will be deployed to the web service.

The script consists of two functions:

- **init**: This function is called when the service is initialized, and is generally used to load the model. Note that the scoring script uses the **AZUREML_MODEL_DIR** environment variable to determine the folder where the model is stored.
- **run**: This function is called each time a client application submits new data, and is generally used to infer predictions from the model.
###Code
%%writefile $script_path
import json
import joblib
import numpy as np
import os
# Called when the service is loaded
def init():
global model
# Get the path to the deployed model file and load it
model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'diabetes_model.pkl')
model = joblib.load(model_path)
# Called when a request is received
def run(raw_data):
# Get the input data as a numpy array
data = np.array(json.loads(raw_data)['data'])
# Get a prediction from the model
predictions = model.predict(data)
# Get the corresponding classname for each prediction (0 or 1)
classnames = ['not-diabetic', 'diabetic']
predicted_classes = []
for prediction in predictions:
predicted_classes.append(classnames[prediction])
# Return the predictions as JSON
return json.dumps(predicted_classes)
###Output
_____no_output_____
###Markdown
The web service will be hosted in a container, and the container will need to install any required Python dependencies when it gets initialized. In this case, our scoring code requires **scikit-learn** and some Azure Machine Learning specific packages that are used by the scoring web service, so we'll create an environment that includes these. Then we'll add that environment to an *inference configuration* along with the scoring script, and define a *deployment configuration* for the container in which the environment and script will be hosted.

We can then deploy the model as a service based on these configurations.

> **More Information**: For more details about model deployment, and options for target execution environments, see the [documentation](https://docs.microsoft.com/azure/machine-learning/how-to-deploy-and-where).

Deployment will take some time as it first runs a process to create a container image, and then runs a process to create a web service based on the image. When deployment has completed successfully, you'll see a status of **Healthy**.
###Code
from azureml.core import Environment
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
# Configure the scoring environment
service_env = Environment(name='service-env')
python_packages = ['scikit-learn', 'azureml-defaults', 'azure-ml-api-sdk']
for package in python_packages:
service_env.python.conda_dependencies.add_pip_package(package)
inference_config = InferenceConfig(source_directory=deployment_folder,
entry_script=script_file,
environment=service_env)
# Configure the web service container
deployment_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)
# Deploy the model as a service
print('Deploying model...')
service_name = "diabetes-service"
service = Model.deploy(ws, service_name, [model], inference_config, deployment_config, overwrite=True)
service.wait_for_deployment(True)
print(service.state)
###Output
_____no_output_____
###Markdown
Hopefully, the deployment has been successful and you can see a status of **Healthy**. If not, you can use the following code to get the service logs to help you troubleshoot.
###Code
print(service.get_logs())
# If you need to make a change and redeploy, you may need to delete unhealthy service using the following code:
#service.delete()
###Output
_____no_output_____
###Markdown
Take a look at your workspace in [Azure Machine Learning Studio](https://ml.azure.com) and view the **Endpoints** page, which shows the deployed services in your workspace.

You can also retrieve the names of web services in your workspace by running the following code:
###Code
for webservice_name in ws.webservices:
print(webservice_name)
###Output
_____no_output_____
###Markdown
Use the web service

With the service deployed, now you can consume it from a client application.
###Code
import json
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22]]
print ('Patient: {}'.format(x_new[0]))
# Convert the array to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Call the web service, passing the input data (the web service will also accept the data in binary format)
predictions = service.run(input_data = input_json)
# Get the predicted class - it'll be the first (and only) one.
predicted_classes = json.loads(predictions)
print(predicted_classes[0])
###Output
_____no_output_____
###Markdown
You can also send multiple patient observations to the service, and get back a prediction for each one.
###Code
import json
# This time our input is an array of two feature arrays
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22],
[0,148,58,11,179,39.19207553,0.160829008,45]]
# Convert the array or arrays to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Call the web service, passing the input data
predictions = service.run(input_data = input_json)
# Get the predicted classes.
predicted_classes = json.loads(predictions)
for i in range(len(x_new)):
print ("Patient {}".format(x_new[i]), predicted_classes[i] )
###Output
_____no_output_____
###Markdown
The code above uses the Azure Machine Learning SDK to connect to the containerized web service and use it to generate predictions from your diabetes classification model. In production, a model is likely to be consumed by business applications that do not use the Azure Machine Learning SDK, but simply make HTTP requests to the web service.

Let's determine the URL to which these applications must submit their requests:
###Code
endpoint = service.scoring_uri
print(endpoint)
###Output
_____no_output_____
###Markdown
Now that you know the endpoint URI, an application can simply make an HTTP request, sending the patient data in JSON format, and receive back the predicted class(es).
###Code
import requests
import json
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22],
[0,148,58,11,179,39.19207553,0.160829008,45]]
# Convert the array to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Set the content type
headers = { 'Content-Type':'application/json' }
predictions = requests.post(endpoint, input_json, headers = headers)
predicted_classes = json.loads(predictions.json())
for i in range(len(x_new)):
print ("Patient {}".format(x_new[i]), predicted_classes[i] )
###Output
_____no_output_____
###Markdown
You've deployed your web service as an Azure Container Instance (ACI) service that requires no authentication. This is fine for development and testing, but for production you should consider deploying to an Azure Kubernetes Service (AKS) cluster and enabling token-based authentication. This would require REST requests to include an **Authorization** header.

Delete the service

When you no longer need your service, you should delete it to avoid incurring unnecessary charges.
###Code
service.delete()
print ('Service deleted.')
###Output
_____no_output_____
###Markdown
Create a real-time inferencing service

After training a predictive model, you can deploy it as a real-time service that clients can use to get predictions from new data.

Connect to your workspace

To get started, connect to your workspace.

> **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.
###Code
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
###Output
Ready to use Azure ML 1.22.0 to work with mba_dp100
###Markdown
Train and register a model

Now let's train and register a model.
###Code
from azureml.core import Experiment
from azureml.core import Model
import pandas as pd
import numpy as np
import joblib
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
# Create an Azure ML experiment in your workspace
experiment = Experiment(workspace=ws, name="09_mslearn-train-diabetes")
run = experiment.start_logging()
print("Starting experiment:", experiment.name)
# load the diabetes dataset
print("Loading Data...")
diabetes = pd.read_csv('users/mikkel.ahlgren/data/diabetes.csv')
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a decision tree model
print('Training a decision tree model')
model = DecisionTreeClassifier().fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# Save the trained model
model_file = 'diabetes_model.pkl'
joblib.dump(value=model, filename=model_file)
run.upload_file(name = 'outputs/' + model_file, path_or_stream = './' + model_file)
# Complete the run
run.complete()
# Register the model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
tags={'Training context':'Inline Training'},
properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
print('Model trained and registered.')
###Output
Starting experiment: 09_mslearn-train-diabetes
Loading Data...
Training a decision tree model
Accuracy: 0.8866666666666667
AUC: 0.874834568884024
Model trained and registered.
###Markdown
Deploy the model as a web service

You have trained and registered a machine learning model that classifies patients based on the likelihood of them having diabetes. This model could be used in a production environment such as a doctor's surgery where only patients deemed to be at risk need to be subjected to a clinical test for diabetes. To support this scenario, you will deploy the model as a web service.

First, let's determine what models you have registered in the workspace.
###Code
from azureml.core import Model
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
###Output
diabetes_model version: 8
Training context : Inline Training
AUC : 0.874834568884024
Accuracy : 0.8866666666666667
diabetes_model version: 7
Training context : Inline Training
AUC : 0.8751156773968853
Accuracy : 0.8883333333333333
diabetes_model version: 6
Training context : Pipeline
AUC : 0.8821010598999647
Accuracy : 0.8973333333333333
diabetes_model version: 5
Training context : Compute cluster
AUC : 0.8811103069277058
Accuracy : 0.8966666666666666
diabetes_model version: 4
Training context : File dataset
AUC : 0.8568743524381947
Accuracy : 0.7891111111111111
06_diabetes_model.pkl version: 1
Training context : Tabular dataset
AUC : 0.8568509052814499
Accuracy : 0.7891111111111111
diabetes_model version: 3
Training context : Parameterized script
AUC : 0.8482685705756505
Accuracy : 0.7736666666666666
diabetes_model version: 2
Training context : Parameterized script
AUC : 0.8483014080302502
Accuracy : 0.774
diabetes_model version: 1
Training context : Script
AUC : 0.8483203144435048
Accuracy : 0.774
###Markdown
Right, now let's get the model that we want to deploy. By default, if we specify a model name, the latest version will be returned.
###Code
model = ws.models['diabetes_model']
print(model.name, 'version', model.version)
###Output
diabetes_model version 8
###Markdown
We're going to create a web service to host this model, and this will require some code and configuration files; so let's create a folder for those.
###Code
import os
folder_name = '09_diabetes_service'
# Create a folder for the web service files
experiment_folder = 'users/mikkel.ahlgren/' + folder_name
os.makedirs(experiment_folder, exist_ok=True)
print(folder_name, 'folder created.')
# Set path for scoring script
script_file = os.path.join(experiment_folder,"score_diabetes.py")
###Output
09_diabetes_service folder created.
###Markdown
The web service where we deploy the model will need some Python code to load the input data, get the model from the workspace, and generate and return predictions. We'll save this code in an *entry script* (often called a *scoring script*) that will be deployed to the web service:
###Code
%%writefile $script_file
import json
import joblib
import numpy as np
from azureml.core.model import Model
# Called when the service is loaded
def init():
global model
# Get the path to the deployed model file and load it
model_path = Model.get_model_path('diabetes_model')
model = joblib.load(model_path)
# Called when a request is received
def run(raw_data):
# Get the input data as a numpy array
data = np.array(json.loads(raw_data)['data'])
# Get a prediction from the model
predictions = model.predict(data)
# Get the corresponding classname for each prediction (0 or 1)
classnames = ['not-diabetic', 'diabetic']
predicted_classes = []
for prediction in predictions:
predicted_classes.append(classnames[prediction])
# Return the predictions as JSON
return json.dumps(predicted_classes)
###Output
Writing users/mikkel.ahlgren/09_diabetes_service/score_diabetes.py
###Markdown
The web service will be hosted in a container, and the container will need to install any required Python dependencies when it gets initialized. In this case, our scoring code requires **scikit-learn**, so we'll create a .yml file that tells the container host to install this into the environment.
###Code
from azureml.core.conda_dependencies import CondaDependencies
# Add the dependencies for our model (AzureML defaults is already included)
myenv = CondaDependencies()
myenv.add_conda_package('scikit-learn')
# Save the environment config as a .yml file
env_file = os.path.join(experiment_folder,"diabetes_env.yml")
with open(env_file,"w") as f:
f.write(myenv.serialize_to_string())
print("Saved dependency info in", env_file)
# Print the .yml file
with open(env_file,"r") as f:
print(f.read())
###Output
Saved dependency info in users/mikkel.ahlgren/09_diabetes_service/diabetes_env.yml
# Conda environment specification. The dependencies defined in this file will
# be automatically provisioned for runs with userManagedDependencies=False.
# Details about the Conda environment file format:
# https://conda.io/docs/user-guide/tasks/manage-environments.html#create-env-file-manually
name: project_environment
dependencies:
# The python interpreter version.
# Currently Azure ML only supports 3.5.2 and later.
- python=3.6.2
- pip:
# Required packages for AzureML execution, history, and data preparation.
- azureml-defaults
- scikit-learn
channels:
- anaconda
- conda-forge
###Markdown
Now you're ready to deploy. We'll deploy the container as a service named **diabetes-service**. The deployment process includes the following steps:

1. Define an inference configuration, which includes the scoring and environment files required to load and use the model.
2. Define a deployment configuration that defines the execution environment in which the service will be hosted. In this case, an Azure Container Instance.
3. Deploy the model as a web service.
4. Verify the status of the deployed service.

> **More Information**: For more details about model deployment, and options for target execution environments, see the [documentation](https://docs.microsoft.com/azure/machine-learning/how-to-deploy-and-where).

Deployment will take some time as it first runs a process to create a container image, and then runs a process to create a web service based on the image. When deployment has completed successfully, you'll see a status of **Healthy**.
###Code
from azureml.core.webservice import AciWebservice
from azureml.core.model import InferenceConfig
# Configure the scoring environment
inference_config = InferenceConfig(runtime= "python",
entry_script=script_file,
conda_file=env_file)
deployment_config = AciWebservice.deploy_configuration(cpu_cores = 1, memory_gb = 1)
service_name = "diabetes-service"
service = Model.deploy(ws, service_name, [model], inference_config, deployment_config)
service.wait_for_deployment(True)
print(service.state)
###Output
Tips: You can try get_logs(): https://aka.ms/debugimage#dockerlog or local deployment: https://aka.ms/debugimage#debug-locally to debug if deployment takes longer than 10 minutes.
Running......................................................................................................................
Succeeded
ACI service creation operation finished, operation "Succeeded"
Healthy
###Markdown
Hopefully, the deployment has been successful and you can see a status of **Healthy**. If not, you can use the following code to get the service logs to help you troubleshoot.
###Code
print(service.get_logs())
# If you need to make a change and redeploy, you may need to delete unhealthy service using the following code:
#service.delete()
###Output
2021-04-13T06:18:03,467761800+00:00 - rsyslog/run
2021-04-13T06:18:03,469056100+00:00 - gunicorn/run
2021-04-13T06:18:03,487372600+00:00 - iot-server/run
2021-04-13T06:18:03,507235000+00:00 - nginx/run
/usr/sbin/nginx: /azureml-envs/azureml_4b824bcb98517d791c41923f24d65461/lib/libcrypto.so.1.0.0: no version information available (required by /usr/sbin/nginx)
/usr/sbin/nginx: /azureml-envs/azureml_4b824bcb98517d791c41923f24d65461/lib/libcrypto.so.1.0.0: no version information available (required by /usr/sbin/nginx)
/usr/sbin/nginx: /azureml-envs/azureml_4b824bcb98517d791c41923f24d65461/lib/libssl.so.1.0.0: no version information available (required by /usr/sbin/nginx)
/usr/sbin/nginx: /azureml-envs/azureml_4b824bcb98517d791c41923f24d65461/lib/libssl.so.1.0.0: no version information available (required by /usr/sbin/nginx)
/usr/sbin/nginx: /azureml-envs/azureml_4b824bcb98517d791c41923f24d65461/lib/libssl.so.1.0.0: no version information available (required by /usr/sbin/nginx)
EdgeHubConnectionString and IOTEDGE_IOTHUBHOSTNAME are not set. Exiting...
2021-04-13T06:18:03,834595900+00:00 - iot-server/finish 1 0
2021-04-13T06:18:03,840556700+00:00 - Exit code 1 is normal. Not restarting iot-server.
Starting gunicorn 19.9.0
Listening at: http://127.0.0.1:31311 (71)
Using worker: sync
worker timeout is set to 300
Booting worker with pid: 98
SPARK_HOME not set. Skipping PySpark Initialization.
Initializing logger
2021-04-13 06:18:06,057 | root | INFO | Starting up app insights client
2021-04-13 06:18:06,058 | root | INFO | Starting up request id generator
2021-04-13 06:18:06,058 | root | INFO | Starting up app insight hooks
2021-04-13 06:18:06,058 | root | INFO | Invoking user's init function
2021-04-13 06:18:06,766 | root | INFO | Users's init has completed successfully
/azureml-envs/azureml_4b824bcb98517d791c41923f24d65461/lib/python3.6/site-packages/sklearn/base.py:334: UserWarning: Trying to unpickle estimator DecisionTreeClassifier from version 0.24.1 when using version 0.23.2. This might lead to breaking code or invalid results. Use at your own risk.
UserWarning)
2021-04-13 06:18:06,769 | root | INFO | Skipping middleware: dbg_model_info as it's not enabled.
2021-04-13 06:18:06,769 | root | INFO | Skipping middleware: dbg_resource_usage as it's not enabled.
2021-04-13 06:18:06,769 | root | INFO | Scoring timeout is found from os.environ: 60000 ms
2021-04-13 06:18:14,147 | root | INFO | Swagger file not present
2021-04-13 06:18:14,148 | root | INFO | 404
127.0.0.1 - - [13/Apr/2021:06:18:14 +0000] "GET /swagger.json HTTP/1.0" 404 19 "-" "Go-http-client/1.1"
2021-04-13 06:18:20,200 | root | INFO | Swagger file not present
2021-04-13 06:18:20,200 | root | INFO | 404
127.0.0.1 - - [13/Apr/2021:06:18:20 +0000] "GET /swagger.json HTTP/1.0" 404 19 "-" "Go-http-client/1.1"
###Markdown
Take a look at your workspace in [Azure Machine Learning Studio](https://ml.azure.com) and view the **Endpoints** page, which shows the deployed services in your workspace.

You can also retrieve the names of web services in your workspace by running the following code:
###Code
for webservice_name in ws.webservices:
print(webservice_name)
###Output
_____no_output_____
###Markdown
Use the web service

With the service deployed, now you can consume it from a client application.
###Code
import json
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22]]
print ('Patient: {}'.format(x_new[0]))
# Convert the array to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Call the web service, passing the input data (the web service will also accept the data in binary format)
predictions = service.run(input_data = input_json)
# Get the predicted class - it'll be the first (and only) one.
predicted_classes = json.loads(predictions)
print(predicted_classes[0])
###Output
Patient: [2, 180, 74, 24, 21, 23.9091702, 1.488172308, 22]
diabetic
###Markdown
You can also send multiple patient observations to the service, and get back a prediction for each one.
###Code
import json
# This time our input is an array of two feature arrays
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22],
[0,148,58,11,179,39.19207553,0.160829008,45]]
# Convert the array or arrays to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Call the web service, passing the input data
predictions = service.run(input_data = input_json) # service is the deployed model; run() is the run(raw_data) function you created in the scoring script
# Get the predicted classes.
predicted_classes = json.loads(predictions)
for i in range(len(x_new)):
print ("Patient {}".format(x_new[i]), predicted_classes[i] )
###Output
Patient [2, 180, 74, 24, 21, 23.9091702, 1.488172308, 22] diabetic
Patient [0, 148, 58, 11, 179, 39.19207553, 0.160829008, 45] not-diabetic
###Markdown
The code above uses the Azure Machine Learning SDK to connect to the containerized web service and use it to generate predictions from your diabetes classification model. In production, a model is likely to be consumed by business applications that do not use the Azure Machine Learning SDK, but simply make HTTP requests to the web service.

Let's determine the URL to which these applications must submit their requests:
###Code
endpoint = service.scoring_uri
print(endpoint)
###Output
http://05aa5105-ab01-471d-bd38-7b685595305e.northeurope.azurecontainer.io/score
###Markdown
Now that you know the endpoint URI, an application can simply make an HTTP request, sending the patient data in JSON format, and receive back the predicted class(es).
###Code
import requests
import json
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22],
[0,148,58,11,179,39.19207553,0.160829008,45]]
# Convert the array to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Set the content type
headers = { 'Content-Type':'application/json' }
predictions = requests.post(endpoint, input_json, headers = headers)
predicted_classes = json.loads(predictions.json())
for i in range(len(x_new)):
print ("Patient {}".format(x_new[i]), predicted_classes[i] )
###Output
Patient [2, 180, 74, 24, 21, 23.9091702, 1.488172308, 22] diabetic
Patient [0, 148, 58, 11, 179, 39.19207553, 0.160829008, 45] not-diabetic
###Markdown
You've deployed your web service as an Azure Container Instance (ACI) service that requires no authentication. This is fine for development and testing, but for production you should consider deploying to an Azure Kubernetes Service (AKS) cluster and enabling token-based authentication. This would require REST requests to include an **Authorization** header; a minimal sketch of such a deployment, assuming an AKS cluster already attached to the workspace, follows.
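###Code
# A minimal sketch, not run in this notebook: deploying the same model to an
# AKS cluster with token-based authentication. The compute target name
# 'aks-cluster' is a hypothetical placeholder for a cluster attached to ws.
from azureml.core.compute import AksCompute
from azureml.core.webservice import AksWebservice

aks_target = AksCompute(ws, 'aks-cluster')
aks_config = AksWebservice.deploy_configuration(cpu_cores=1, memory_gb=1,
                                                token_auth_enabled=True)
aks_service = Model.deploy(ws, 'diabetes-service-aks', [model],
                           inference_config, aks_config,
                           deployment_target=aks_target)
aks_service.wait_for_deployment(True)
print(aks_service.state)
###Output
_____no_output_____
###Markdown
Delete the service

When you no longer need your service, you should delete it to avoid incurring unnecessary charges.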
###Code
service.delete()
print ('Service deleted.')
###Output
Service deleted.
###Markdown
Create a real-time inferencing service

After training a predictive model, you can deploy it as a real-time service that clients can use to get predictions from new data.

Connect to your workspace

To get started, connect to your workspace.

> **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.
###Code
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
###Output
_____no_output_____
###Markdown
Train and register a model

Now let's train and register a model.
###Code
from azureml.core import Experiment
from azureml.core import Model
import pandas as pd
import numpy as np
import joblib
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
# Create an Azure ML experiment in your workspace
experiment = Experiment(workspace=ws, name="mslearn-train-diabetes")
run = experiment.start_logging()
print("Starting experiment:", experiment.name)
# load the diabetes dataset
print("Loading Data...")
diabetes = pd.read_csv('data/diabetes.csv')
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a decision tree model
print('Training a decision tree model')
model = DecisionTreeClassifier().fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', float(auc))
# Save the trained model
model_file = 'diabetes_model.pkl'
joblib.dump(value=model, filename=model_file)
run.upload_file(name = 'outputs/' + model_file, path_or_stream = './' + model_file)
# Complete the run
run.complete()
# Register the model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
tags={'Training context':'Inline Training'},
properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
print('Model trained and registered.')
###Output
_____no_output_____
###Markdown
Deploy the model as a web service

You have trained and registered a machine learning model that classifies patients based on the likelihood of them having diabetes. This model could be used in a production environment such as a doctor's surgery where only patients deemed to be at risk need to be subjected to a clinical test for diabetes. To support this scenario, you will deploy the model as a web service.

First, let's determine what models you have registered in the workspace.
###Code
from azureml.core import Model
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
###Output
_____no_output_____
###Markdown
Now let's get the model that we want to deploy. By default, if we specify a model name, the latest version will be returned.
###Code
model = ws.models['diabetes_model']
print(model.name, 'version', model.version)
###Output
_____no_output_____
###Markdown
We're going to create a web service to host this model, and this will require some code and configuration files, so let's create a folder for those.
###Code
import os
# Create a folder for the deployment files
deployment_folder = './diabetes_service'
os.makedirs(deployment_folder, exist_ok=True)
print(deployment_folder, 'folder created.')
# Set path for scoring script
script_file = 'score_diabetes.py'
script_path = os.path.join(deployment_folder,script_file)
###Output
_____no_output_____
###Markdown
The web service where we deploy the model will need some Python code to load the input data, get the model from the workspace, and generate and return predictions. We'll save this code in an *entry script* (often called a *scoring script*) that will be deployed to the web service.

The script consists of two functions:

- **init**: Called when the service is initialized, and generally used to load the model. Note that the scoring script uses the **AZUREML_MODEL_DIR** environment variable to determine the folder where the model is stored.
- **run**: Called each time a client application submits new data, and generally used to infer predictions from the model.
###Code
%%writefile $script_path
import json
import joblib
import numpy as np
import os
# Called when the service is loaded
def init():
global model
# Get the path to the deployed model file and load it
model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'diabetes_model.pkl')
model = joblib.load(model_path)
# Called when a request is received
def run(raw_data):
# Get the input data as a numpy array
data = np.array(json.loads(raw_data)['data'])
# Get a prediction from the model
predictions = model.predict(data)
# Get the corresponding classname for each prediction (0 or 1)
classnames = ['not-diabetic', 'diabetic']
predicted_classes = []
for prediction in predictions:
predicted_classes.append(classnames[prediction])
# Return the predictions as JSON
return json.dumps(predicted_classes)
###Output
_____no_output_____
###Markdown
The web service will be hosted in a container, and the container will need to install any required Python dependencies when it gets initialized. In this case, the scoring code requires **scikit-learn** and some Azure Machine Learning specific packages used by the scoring web service, so we'll create an environment that includes these. Then we'll add that environment, along with the scoring script, to an *inference configuration*, and define a *deployment configuration* for the container in which the environment and script will be hosted.

We can then deploy the model as a service based on these configurations.

> **More Information**: For more details about model deployment, and options for target execution environments, see the [documentation](https://docs.microsoft.com/azure/machine-learning/how-to-deploy-and-where).

Deployment will take some time as it first runs a process to create a container image, and then runs a process to create a web service based on the image. When deployment has completed successfully, you'll see a status of **Healthy**.
###Code
from azureml.core import Environment
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
# Configure the scoring environment
service_env = Environment(name='service-env')
python_packages = ['scikit-learn', 'azureml-defaults', 'azure-ml-api-sdk']
for package in python_packages:
service_env.python.conda_dependencies.add_pip_package(package)
inference_config = InferenceConfig(source_directory=deployment_folder,
entry_script=script_file,
environment=service_env)
# Configure the web service container
deployment_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)
# Deploy the model as a service
print('Deploying model...')
service_name = "diabetes-service"
service = Model.deploy(ws, service_name, [model], inference_config, deployment_config, overwrite=True)
service.wait_for_deployment(True)
print(service.state)
###Output
_____no_output_____
###Markdown
Hopefully, the deployment has been successful and you can see a status of **Healthy**. If not, you can use the following code to get the service logs to help you troubleshoot.
###Code
print(service.get_logs())
# If you need to make a change and redeploy, you may need to delete unhealthy service using the following code:
#service.delete()
###Output
_____no_output_____
###Markdown
Take a look at your workspace in [Azure Machine Learning Studio](https://ml.azure.com) and view the **Endpoints** page, which shows the deployed services in your workspace.

You can also retrieve the names of web services in your workspace by running the following code:
###Code
for webservice_name in ws.webservices:
print(webservice_name)
###Output
_____no_output_____
###Markdown
Use the web service

With the service deployed, now you can consume it from a client application.
###Code
import json
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22]]
print ('Patient: {}'.format(x_new[0]))
# Convert the array to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Call the web service, passing the input data (the web service will also accept the data in binary format)
predictions = service.run(input_data = input_json)
# Get the predicted class - it'll be the first (and only) one.
predicted_classes = json.loads(predictions)
print(predicted_classes[0])
###Output
_____no_output_____
###Markdown
You can also send multiple patient observations to the service, and get back a prediction for each one.
###Code
import json
# This time our input is an array of two feature arrays
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22],
[0,148,58,11,179,39.19207553,0.160829008,45]]
# Convert the array or arrays to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Call the web service, passing the input data
predictions = service.run(input_data = input_json)
# Get the predicted classes.
predicted_classes = json.loads(predictions)
for i in range(len(x_new)):
print ("Patient {}".format(x_new[i]), predicted_classes[i] )
###Output
_____no_output_____
###Markdown
The code above uses the Azure Machine Learning SDK to connect to the containerized web service and use it to generate predictions from your diabetes classification model. In production, a model is likely to be consumed by business applications that do not use the Azure Machine Learning SDK, but simply make HTTP requests to the web service.

Let's determine the URL to which these applications must submit their requests:
###Code
endpoint = service.scoring_uri
print(endpoint)
###Output
_____no_output_____
###Markdown
Now that you know the endpoint URI, an application can simply make an HTTP request, sending the patient data in JSON format, and receive back the predicted class(es).
###Code
import requests
import json
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22],
[0,148,58,11,179,39.19207553,0.160829008,45]]
# Convert the array to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Set the content type
headers = { 'Content-Type':'application/json' }
predictions = requests.post(endpoint, input_json, headers = headers)
predicted_classes = json.loads(predictions.json())
for i in range(len(x_new)):
print ("Patient {}".format(x_new[i]), predicted_classes[i] )
###Output
_____no_output_____
###Markdown
You've deployed your web service as an Azure Container Instance (ACI) service that requires no authentication. This is fine for development and testing, but for production you should consider deploying to an Azure Kubernetes Service (AKS) cluster and enabling token-based authentication. This would require REST requests to include an **Authorization** header; the sketch below shows how a client might obtain and pass such a token.
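###Code
# A hedged sketch of calling a token-authenticated AKS service. It assumes an
# AksWebservice object 'aks_service' deployed with token_auth_enabled=True;
# that object is illustrative and is not created anywhere in this lab.
import requests
import json

token, refresh_after = aks_service.get_token()  # tokens expire; re-fetch after refresh_after
headers = { 'Content-Type':'application/json',
            'Authorization':'Bearer ' + token }
input_json = json.dumps({"data": [[2,180,74,24,21,23.9091702,1.488172308,22]]})
response = requests.post(aks_service.scoring_uri, input_json, headers=headers)
print(response.json())
###Output
_____no_output_____
###Markdown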
Delete the service

When you no longer need your service, you should delete it to avoid incurring unnecessary charges.
###Code
service.delete()
print ('Service deleted.')
###Output
_____no_output_____
###Markdown
Create a real-time inferencing service

After training a predictive model, you can deploy it as a real-time service that clients can use to get predictions from new data.

Connect to your workspace

To get started, connect to your workspace.

> **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.
###Code
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config(".\\Working_Files\\config.json")
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
###Output
Ready to use Azure ML 1.24.0 to work with AML
###Markdown
Train and register a model

Now let's train and register a model.
###Code
from azureml.core import Experiment
from azureml.core import Model
import pandas as pd
import numpy as np
import joblib
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
# Create an Azure ML experiment in your workspace
experiment = Experiment(workspace=ws, name="mslearn-train-diabetes")
run = experiment.start_logging()
print("Starting experiment:", experiment.name)
# load the diabetes dataset
print("Loading Data...")
diabetes = pd.read_csv('data/diabetes.csv')
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a decision tree model
print('Training a decision tree model')
model = DecisionTreeClassifier().fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', float(auc))
# Save the trained model
model_file = 'diabetes_model.pkl'
joblib.dump(value=model, filename=model_file)
run.upload_file(name = 'outputs/' + model_file, path_or_stream = './' + model_file)
# Complete the run
run.complete()
# Register the model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
tags={'Training context':'Inline Training'},
properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
print('Model trained and registered.')
###Output
Starting experiment: mslearn-train-diabetes
Loading Data...
Training a decision tree model
Accuracy: 0.8926666666666667
AUC: 0.8803323548435243
Model trained and registered.
###Markdown
Deploy the model as a web service

You have trained and registered a machine learning model that classifies patients based on the likelihood of them having diabetes. This model could be used in a production environment such as a doctor's surgery where only patients deemed to be at risk need to be subjected to a clinical test for diabetes. To support this scenario, you will deploy the model as a web service.

First, let's determine what models you have registered in the workspace.
###Code
from azureml.core import Model
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
###Output
diabetes_model version: 6
Training context : Inline Training
AUC : 0.8803323548435243
Accuracy : 0.8926666666666667
diabetes_model version: 5
Training context : Pipeline
AUC : 0.886222229497231
Accuracy : 0.8997777777777778
diabetes_model version: 4
Training context : File dataset
AUC : 0.856863734857782
Accuracy : 0.7893333333333333
diabetes_model version: 3
Training context : Tabular dataset
AUC : 0.8568520112794097
Accuracy : 0.7891111111111111
diabetes_model version: 2
Training context : Parameterized script
AUC : 0.8484048957659586
Accuracy : 0.7736666666666666
diabetes_model version: 1
Training context : Script
AUC : 0.848370565699786
Accuracy : 0.774
amlstudio-predict-diabetes version: 1
CreatedByAMLStudio : true
amlstudio-predict-penguin-clus version: 1
CreatedByAMLStudio : true
amlstudio-predict-auto-price version: 1
CreatedByAMLStudio : true
AutoML829737c9d0 version: 1
###Markdown
Right, now let's get the model that we want to deploy. By default, if we specify a model name, the latest version will be returned.
###Code
model = ws.models['diabetes_model']
print(model.name, 'version', model.version)
###Output
diabetes_model version 6
###Markdown
We're going to create a web service to host this model, and this will require some code and configuration files; so let's create a folder for those.
###Code
import os
folder_name = 'diabetes_service'
# Create a folder for the web service files
experiment_folder = './' + folder_name
os.makedirs(experiment_folder, exist_ok=True)
print(folder_name, 'folder created.')
# Set path for scoring script
script_file = os.path.join(experiment_folder,"score_diabetes.py")
###Output
diabetes_service folder created.
###Markdown
The web service where we deploy the model will need some Python code to load the input data, get the model from the workspace, and generate and return predictions. We'll save this code in an *entry script* (often called a *scoring script*) that will be deployed to the web service:
###Code
%%writefile $script_file
import json
import joblib
import numpy as np
from azureml.core.model import Model
# Called when the service is loaded
def init():
global model
# Get the path to the deployed model file and load it
model_path = Model.get_model_path('diabetes_model')
model = joblib.load(model_path)
# Called when a request is received
def run(raw_data):
# Get the input data as a numpy array
data = np.array(json.loads(raw_data)['data'])
# Get a prediction from the model
predictions = model.predict(data)
# Get the corresponding classname for each prediction (0 or 1)
classnames = ['not-diabetic', 'diabetic']
predicted_classes = []
for prediction in predictions:
predicted_classes.append(classnames[prediction])
# Return the predictions as JSON
return json.dumps(predicted_classes)
###Output
Writing ./diabetes_service\score_diabetes.py
###Markdown
The web service will be hosted in a container, and the container will need to install any required Python dependencies when it gets initialized. In this case, our scoring code requires **scikit-learn**, so we'll create a .yml file that tells the container host to install this into the environment.
###Code
from azureml.core.conda_dependencies import CondaDependencies
# Add the dependencies for our model (AzureML defaults is already included)
myenv = CondaDependencies()
myenv.add_conda_package('scikit-learn')
# Save the environment config as a .yml file
env_file = os.path.join(experiment_folder,"diabetes_env.yml")
with open(env_file,"w") as f:
f.write(myenv.serialize_to_string())
print("Saved dependency info in", env_file)
# Print the .yml file
with open(env_file,"r") as f:
print(f.read())
###Output
Saved dependency info in ./diabetes_service\diabetes_env.yml
# Conda environment specification. The dependencies defined in this file will
# be automatically provisioned for runs with userManagedDependencies=False.
# Details about the Conda environment file format:
# https://conda.io/docs/user-guide/tasks/manage-environments.html#create-env-file-manually
name: project_environment
dependencies:
# The python interpreter version.
# Currently Azure ML only supports 3.5.2 and later.
- python=3.6.2
- pip:
# Required packages for AzureML execution, history, and data preparation.
- azureml-defaults
- scikit-learn
channels:
- anaconda
- conda-forge
###Markdown
Now you're ready to deploy. We'll deploy the container as a service named **diabetes-service**. The deployment process includes the following steps:

1. Define an inference configuration, which includes the scoring and environment files required to load and use the model.
2. Define a deployment configuration that defines the execution environment in which the service will be hosted. In this case, an Azure Container Instance.
3. Deploy the model as a web service.
4. Verify the status of the deployed service.

> **More Information**: For more details about model deployment, and options for target execution environments, see the [documentation](https://docs.microsoft.com/azure/machine-learning/how-to-deploy-and-where).

Deployment will take some time as it first runs a process to create a container image, and then runs a process to create a web service based on the image. When deployment has completed successfully, you'll see a status of **Healthy**.
###Code
from azureml.core.webservice import AciWebservice
from azureml.core.model import InferenceConfig
# Configure the scoring environment
inference_config = InferenceConfig(runtime= "python",
entry_script=script_file,
conda_file=env_file)
deployment_config = AciWebservice.deploy_configuration(cpu_cores = 1, memory_gb = 1)
service_name = "diabetes-service"
service = Model.deploy(ws, service_name, [model], inference_config, deployment_config)
service.wait_for_deployment(True)
print(service.state)
###Output
Tips: You can try get_logs(): https://aka.ms/debugimage#dockerlog or local deployment: https://aka.ms/debugimage#debug-locally to debug if deployment takes longer than 10 minutes.
Running
2021-03-20 06:54:18+00:00 Creating Container Registry if not exists.
2021-03-20 06:54:19+00:00 Registering the environment..
2021-03-20 06:54:36+00:00 Building image..
2021-03-20 07:01:58+00:00 Generating deployment configuration.
2021-03-20 07:01:59+00:00 Submitting deployment to compute..
2021-03-20 07:02:09+00:00 Checking the status of deployment diabetes-service..
2021-03-20 07:04:32+00:00 Checking the status of inference endpoint diabetes-service.
Succeeded
ACI service creation operation finished, operation "Succeeded"
Healthy
###Markdown
Hopefully, the deployment has been successful and you can see a status of **Healthy**. If not, you can use the following code to get the service logs to help you troubleshoot.
###Code
print(service.get_logs())
# If you need to make a change and redeploy, you may need to delete unhealthy service using the following code:
#service.delete()
###Output
2021-03-20T07:04:23,473731100+00:00 - rsyslog/run
2021-03-20T07:04:23,476052700+00:00 - iot-server/run
2021-03-20T07:04:23,508932100+00:00 - gunicorn/run
2021-03-20T07:04:23,531688400+00:00 - nginx/run
/usr/sbin/nginx: /azureml-envs/azureml_4b824bcb98517d791c41923f24d65461/lib/libcrypto.so.1.0.0: no version information available (required by /usr/sbin/nginx)
/usr/sbin/nginx: /azureml-envs/azureml_4b824bcb98517d791c41923f24d65461/lib/libcrypto.so.1.0.0: no version information available (required by /usr/sbin/nginx)
/usr/sbin/nginx: /azureml-envs/azureml_4b824bcb98517d791c41923f24d65461/lib/libssl.so.1.0.0: no version information available (required by /usr/sbin/nginx)
/usr/sbin/nginx: /azureml-envs/azureml_4b824bcb98517d791c41923f24d65461/lib/libssl.so.1.0.0: no version information available (required by /usr/sbin/nginx)
/usr/sbin/nginx: /azureml-envs/azureml_4b824bcb98517d791c41923f24d65461/lib/libssl.so.1.0.0: no version information available (required by /usr/sbin/nginx)
EdgeHubConnectionString and IOTEDGE_IOTHUBHOSTNAME are not set. Exiting...
2021-03-20T07:04:25,261291500+00:00 - iot-server/finish 1 0
2021-03-20T07:04:25,274558100+00:00 - Exit code 1 is normal. Not restarting iot-server.
Starting gunicorn 19.9.0
Listening at: http://127.0.0.1:31311 (71)
Using worker: sync
worker timeout is set to 300
Booting worker with pid: 99
SPARK_HOME not set. Skipping PySpark Initialization.
Initializing logger
2021-03-20 07:04:30,755 | root | INFO | Starting up app insights client
2021-03-20 07:04:30,755 | root | INFO | Starting up request id generator
2021-03-20 07:04:30,756 | root | INFO | Starting up app insight hooks
2021-03-20 07:04:30,757 | root | INFO | Invoking user's init function
2021-03-20 07:04:33,078 | root | INFO | Users's init has completed successfully
/azureml-envs/azureml_4b824bcb98517d791c41923f24d65461/lib/python3.6/site-packages/sklearn/base.py:334: UserWarning: Trying to unpickle estimator DecisionTreeClassifier from version 0.24.1 when using version 0.23.2. This might lead to breaking code or invalid results. Use at your own risk.
UserWarning)
2021-03-20 07:04:33,089 | root | INFO | Skipping middleware: dbg_model_info as it's not enabled.
2021-03-20 07:04:33,090 | root | INFO | Skipping middleware: dbg_resource_usage as it's not enabled.
2021-03-20 07:04:33,091 | root | INFO | Scoring timeout is found from os.environ: 60000 ms
2021-03-20 07:04:33,151 | root | INFO | Swagger file not present
2021-03-20 07:04:33,152 | root | INFO | 404
127.0.0.1 - - [20/Mar/2021:07:04:33 +0000] "GET /swagger.json HTTP/1.0" 404 19 "-" "Go-http-client/1.1"
2021-03-20 07:04:39,031 | root | INFO | Swagger file not present
2021-03-20 07:04:39,031 | root | INFO | 404
127.0.0.1 - - [20/Mar/2021:07:04:39 +0000] "GET /swagger.json HTTP/1.0" 404 19 "-" "Go-http-client/1.1"
###Markdown
Take a look at your workspace in [Azure Machine Learning Studio](https://ml.azure.com) and view the **Endpoints** page, which shows the deployed services in your workspace.

You can also retrieve the names of web services in your workspace by running the following code:
###Code
for webservice_name in ws.webservices:
print(webservice_name)
###Output
diabetes-service
predict-diabetes
predict-penguin-clusters
predict-auto-price
###Markdown
Use the web service

With the service deployed, now you can consume it from a client application.
###Code
import json
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22]]
print ('Patient: {}'.format(x_new[0]))
# Convert the array to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Call the web service, passing the input data (the web service will also accept the data in binary format)
predictions = service.run(input_data = input_json)
# Get the predicted class - it'll be the first (and only) one.
predicted_classes = json.loads(predictions)
print(predicted_classes[0])
###Output
Patient: [2, 180, 74, 24, 21, 23.9091702, 1.488172308, 22]
diabetic
###Markdown
You can also send multiple patient observations to the service, and get back a prediction for each one.
###Code
import json
# This time our input is an array of two feature arrays
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22],
[0,148,58,11,179,39.19207553,0.160829008,45]]
# Convert the array or arrays to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Call the web service, passing the input data
predictions = service.run(input_data = input_json)
# Get the predicted classes.
predicted_classes = json.loads(predictions)
for i in range(len(x_new)):
print ("Patient {}".format(x_new[i]), predicted_classes[i] )
###Output
Patient [2, 180, 74, 24, 21, 23.9091702, 1.488172308, 22] diabetic
Patient [0, 148, 58, 11, 179, 39.19207553, 0.160829008, 45] not-diabetic
###Markdown
The code above uses the Azure Machine Learning SDK to connect to the containerized web service and use it to generate predictions from your diabetes classification model. In production, a model is likely to be consumed by business applications that do not use the Azure Machine Learning SDK, but simply make HTTP requests to the web service.

Let's determine the URL to which these applications must submit their requests:
###Code
endpoint = service.scoring_uri
print(endpoint)
###Output
http://b932f211-a133-4d72-84df-596babeb7bb1.westeurope.azurecontainer.io/score
###Markdown
Now that you know the endpoint URI, an application can simply make an HTTP request, sending the patient data in JSON format, and receive back the predicted class(es).
###Code
import requests
import json
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22],
[0,148,58,11,179,39.19207553,0.160829008,45]]
# Convert the array to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Set the content type
headers = { 'Content-Type':'application/json' }
predictions = requests.post(endpoint, input_json, headers = headers)
predicted_classes = json.loads(predictions.json())
for i in range(len(x_new)):
print ("Patient {}".format(x_new[i]), predicted_classes[i] )
###Output
Patient [2, 180, 74, 24, 21, 23.9091702, 1.488172308, 22] diabetic
Patient [0, 148, 58, 11, 179, 39.19207553, 0.160829008, 45] not-diabetic
###Markdown
You've deployed your web service as an Azure Container Instance (ACI) service that requires no authentication. This is fine for development and testing, but for production you should consider deploying to an Azure Kubernetes Service (AKS) cluster and enabling token-based authentication, which requires REST requests to include an **Authorization** header. ACI itself also supports simpler key-based authentication; a hedged sketch of that option follows.
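###Code
# A hedged sketch of key-based authentication on ACI, not part of this lab's
# deployment (which left authentication disabled). The service name
# 'diabetes-service-auth' is a hypothetical placeholder.
from azureml.core.webservice import AciWebservice

auth_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1,
                                                 auth_enabled=True)
auth_service = Model.deploy(ws, 'diabetes-service-auth', [model],
                            inference_config, auth_config)
auth_service.wait_for_deployment(True)

# Clients retrieve a key and send it as a bearer token in the Authorization header.
primary_key, secondary_key = auth_service.get_keys()
auth_headers = { 'Content-Type':'application/json',
                 'Authorization':'Bearer ' + primary_key }
###Output
_____no_output_____
###Markdown
Delete the service

When you no longer need your service, you should delete it to avoid incurring unnecessary charges.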
###Code
service.delete()
print ('Service deleted.')
###Output
Service deleted.
###Markdown
Create a real-time inferencing service

After training a predictive model, you can deploy it as a real-time service that clients can use to get predictions from new data.

Connect to your workspace

To get started, connect to your workspace.

> **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.
###Code
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
###Output
Ready to use Azure ML 1.27.0 to work with aml-revision
###Markdown
Train and register a model

Now let's train and register a model.
###Code
from azureml.core import Experiment
from azureml.core import Model
import pandas as pd
import numpy as np
import joblib
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
# Create an Azure ML experiment in your workspace
experiment = Experiment(workspace=ws, name="mslearn-train-diabetes")
run = experiment.start_logging()
print("Starting experiment:", experiment.name)
# load the diabetes dataset
print("Loading Data...")
diabetes = pd.read_csv('data/diabetes.csv')
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a decision tree model
print('Training a decision tree model')
model = DecisionTreeClassifier().fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', float(auc))
# Save the trained model
model_file = 'diabetes_model.pkl'
joblib.dump(value=model, filename=model_file)
run.upload_file(name = 'outputs/' + model_file, path_or_stream = './' + model_file)
# Complete the run
run.complete()
# Register the model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
tags={'Training context':'Inline Training'},
properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
print('Model trained and registered.')
###Output
Starting experiment: mslearn-train-diabetes
Loading Data...
Training a decision tree model
Accuracy: 0.891
AUC: 0.8795636598835762
Model trained and registered.
###Markdown
Deploy the model as a web service

You have trained and registered a machine learning model that classifies patients based on the likelihood of them having diabetes. This model could be used in a production environment such as a doctor's surgery where only patients deemed to be at risk need to be subjected to a clinical test for diabetes. To support this scenario, you will deploy the model as a web service.

First, let's determine what models you have registered in the workspace.
###Code
from azureml.core import Model
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
###Output
diabetes_model version: 9
Training context : Inline Training
AUC : 0.8795636598835762
Accuracy : 0.891
diabetes_model version: 8
Training context : Pipeline
AUC : 0.8837662504280213
Accuracy : 0.8991111111111111
diabetes_model version: 7
Training context : Compute cluster
AUC : 0.8806126078458612
Accuracy : 0.8962222222222223
diabetes_model version: 6
Training context : File dataset
AUC : 0.8468331741963582
Accuracy : 0.7793333333333333
diabetes_model version: 5
Training context : Tabular dataset
AUC : 0.8568509052814499
Accuracy : 0.7891111111111111
diabetes_model version: 4
Training context : Parameterized script
AUC : 0.8483198169063138
Accuracy : 0.774
diabetes_model version: 3
Training context : Script
AUC : 0.8484929598487486
Accuracy : 0.774
diabetes_model version: 2
Training context : Parameterized script
AUC : 0.8483198169063138
Accuracy : 0.774
diabetes_model version: 1
Training context : Script
AUC : 0.8484929598487486
Accuracy : 0.774
amlstudio-designer-predict-dia version: 1
CreatedByAMLStudio : true
PipelineTrainedClass version: 1
CreatedByAMLStudio : true
AutoMLcde5d93451 version: 1
###Markdown
Right, now let's get the model that we want to deploy. By default, if we specify a model name, the latest version will be returned.
###Code
model = ws.models['diabetes_model']
print(model.name, 'version', model.version)
###Output
diabetes_model version 9
###Markdown
We're going to create a web service to host this model, and this will require some code and configuration files; so let's create a folder for those.
###Code
import os
folder_name = 'diabetes_service'
# Create a folder for the web service files
experiment_folder = './' + folder_name
os.makedirs(experiment_folder, exist_ok=True)
print(folder_name, 'folder created.')
# Set path for scoring script
script_file = os.path.join(experiment_folder,"score_diabetes.py")
###Output
diabetes_service folder created.
###Markdown
The web service where we deploy the model will need some Python code to load the input data, get the model from the workspace, and generate and return predictions. We'll save this code in an *entry script* (often called a *scoring script*) that will be deployed to the web service:
###Code
%%writefile $script_file
import json
import joblib
import numpy as np
from azureml.core.model import Model
# Called when the service is loaded
def init():
global model
# Get the path to the deployed model file and load it
model_path = Model.get_model_path('diabetes_model')
model = joblib.load(model_path)
# Called when a request is received
def run(raw_data):
# Get the input data as a numpy array
data = np.array(json.loads(raw_data)['data'])
# Get a prediction from the model
predictions = model.predict(data)
# Get the corresponding classname for each prediction (0 or 1)
classnames = ['not-diabetic', 'diabetic']
predicted_classes = []
for prediction in predictions:
predicted_classes.append(classnames[prediction])
# Return the predictions as JSON
return json.dumps(predicted_classes)
###Output
Writing ./diabetes_service/score_diabetes.py
###Markdown
The web service will be hosted in a container, and the container will need to install any required Python dependencies when it gets initialized. In this case, our scoring code requires **scikit-learn**, so we'll create a .yml file that tells the container host to install this into the environment.
###Code
from azureml.core.conda_dependencies import CondaDependencies
# Add the dependencies for our model (AzureML defaults is already included)
myenv = CondaDependencies()
myenv.add_conda_package('scikit-learn')
# Save the environment config as a .yml file
env_file = os.path.join(experiment_folder,"diabetes_env.yml")
with open(env_file,"w") as f:
f.write(myenv.serialize_to_string())
print("Saved dependency info in", env_file)
# Print the .yml file
with open(env_file,"r") as f:
print(f.read())
###Output
Saved dependency info in ./diabetes_service/diabetes_env.yml
# Conda environment specification. The dependencies defined in this file will
# be automatically provisioned for runs with userManagedDependencies=False.
# Details about the Conda environment file format:
# https://conda.io/docs/user-guide/tasks/manage-environments.html#create-env-file-manually
name: project_environment
dependencies:
# The python interpreter version.
# Currently Azure ML only supports 3.5.2 and later.
- python=3.6.2
- pip:
# Required packages for AzureML execution, history, and data preparation.
- azureml-defaults
- scikit-learn
channels:
- anaconda
- conda-forge
###Markdown
Now you're ready to deploy. We'll deploy the container as a service named **diabetes-service**. The deployment process includes the following steps:

1. Define an inference configuration, which includes the scoring and environment files required to load and use the model.
2. Define a deployment configuration that defines the execution environment in which the service will be hosted. In this case, an Azure Container Instance.
3. Deploy the model as a web service.
4. Verify the status of the deployed service.

> **More Information**: For more details about model deployment, and options for target execution environments, see the [documentation](https://docs.microsoft.com/azure/machine-learning/how-to-deploy-and-where).

Deployment will take some time as it first runs a process to create a container image, and then runs a process to create a web service based on the image. When deployment has completed successfully, you'll see a status of **Healthy**.
###Code
from azureml.core.webservice import AciWebservice
from azureml.core.model import InferenceConfig
# Configure the scoring environment
inference_config = InferenceConfig(runtime= "python",
entry_script=script_file,
conda_file=env_file)
deployment_config = AciWebservice.deploy_configuration(cpu_cores = 1, memory_gb = 1)
service_name = "diabetes-service"
service = Model.deploy(ws, service_name, [model], inference_config, deployment_config)
service.wait_for_deployment(True)
print(service.state)
###Output
Tips: You can try get_logs(): https://aka.ms/debugimage#dockerlog or local deployment: https://aka.ms/debugimage#debug-locally to debug if deployment takes longer than 10 minutes.
Running
2021-05-31 22:54:56+00:00 Creating Container Registry if not exists.
2021-05-31 22:54:56+00:00 Registering the environment.
2021-05-31 22:54:58+00:00 Building image..
2021-05-31 23:01:26+00:00 Generating deployment configuration.
2021-05-31 23:01:27+00:00 Submitting deployment to compute..
2021-05-31 23:01:30+00:00 Checking the status of deployment diabetes-service..
2021-05-31 23:02:49+00:00 Checking the status of inference endpoint diabetes-service.
Succeeded
ACI service creation operation finished, operation "Succeeded"
Healthy
###Markdown
Hopefully, the deployment has been successful and you can see a status of **Healthy**. If not, you can use the following code to get the service logs to help you troubleshoot.
###Code
print(service.get_logs())
# If you need to make a change and redeploy, you may need to delete unhealthy service using the following code:
#service.delete()
###Output
2021-05-31T23:02:40,904446500+00:00 - gunicorn/run
2021-05-31T23:02:40,916451500+00:00 - nginx/run
/usr/sbin/nginx: /azureml-envs/azureml_4b824bcb98517d791c41923f24d65461/lib/libcrypto.so.1.0.0: no version information available (required by /usr/sbin/nginx)
2021-05-31T23:02:40,917322100+00:00 - rsyslog/run
/usr/sbin/nginx: /azureml-envs/azureml_4b824bcb98517d791c41923f24d65461/lib/libcrypto.so.1.0.0: no version information available (required by /usr/sbin/nginx)
/usr/sbin/nginx: /azureml-envs/azureml_4b824bcb98517d791c41923f24d65461/lib/libssl.so.1.0.0: no version information available (required by /usr/sbin/nginx)
/usr/sbin/nginx: /azureml-envs/azureml_4b824bcb98517d791c41923f24d65461/lib/libssl.so.1.0.0: no version information available (required by /usr/sbin/nginx)
/usr/sbin/nginx: /azureml-envs/azureml_4b824bcb98517d791c41923f24d65461/lib/libssl.so.1.0.0: no version information available (required by /usr/sbin/nginx)
2021-05-31T23:02:40,935460800+00:00 - iot-server/run
EdgeHubConnectionString and IOTEDGE_IOTHUBHOSTNAME are not set. Exiting...
2021-05-31T23:02:41,401654300+00:00 - iot-server/finish 1 0
2021-05-31T23:02:41,404811400+00:00 - Exit code 1 is normal. Not restarting iot-server.
Starting gunicorn 20.1.0
Listening at: http://127.0.0.1:31311 (69)
Using worker: sync
worker timeout is set to 300
Booting worker with pid: 99
SPARK_HOME not set. Skipping PySpark Initialization.
Initializing logger
2021-05-31 23:02:44,269 | root | INFO | Starting up app insights client
2021-05-31 23:02:44,270 | root | INFO | Starting up request id generator
2021-05-31 23:02:44,270 | root | INFO | Starting up app insight hooks
2021-05-31 23:02:44,270 | root | INFO | Invoking user's init function
2021-05-31 23:02:45,198 | root | INFO | Users's init has completed successfully
/azureml-envs/azureml_4b824bcb98517d791c41923f24d65461/lib/python3.6/site-packages/sklearn/base.py:334: UserWarning: Trying to unpickle estimator DecisionTreeClassifier from version 0.22.2.post1 when using version 0.23.2. This might lead to breaking code or invalid results. Use at your own risk.
UserWarning)
2021-05-31 23:02:45,203 | root | INFO | Skipping middleware: dbg_model_info as it's not enabled.
2021-05-31 23:02:45,203 | root | INFO | Skipping middleware: dbg_resource_usage as it's not enabled.
2021-05-31 23:02:45,207 | root | INFO | Scoring timeout is found from os.environ: 60000 ms
2021-05-31 23:02:49,534 | root | INFO | Swagger file not present
2021-05-31 23:02:49,536 | root | INFO | 404
127.0.0.1 - - [31/May/2021:23:02:49 +0000] "GET /swagger.json HTTP/1.0" 404 19 "-" "Go-http-client/1.1"
2021-05-31 23:02:53,015 | root | INFO | Swagger file not present
2021-05-31 23:02:53,016 | root | INFO | 404
127.0.0.1 - - [31/May/2021:23:02:53 +0000] "GET /swagger.json HTTP/1.0" 404 19 "-" "Go-http-client/1.1"
###Markdown
Take a look at your workspace in [Azure Machine Learning Studio](https://ml.azure.com) and view the **Endpoints** page, which shows the deployed services in your workspace.

You can also retrieve the names of web services in your workspace by running the following code:
###Code
for webservice_name in ws.webservices:
print(webservice_name)
###Output
diabetes-service
###Markdown
Use the web service

With the service deployed, now you can consume it from a client application.
###Code
import json
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22]]
print ('Patient: {}'.format(x_new[0]))
# Convert the array to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Call the web service, passing the input data (the web service will also accept the data in binary format)
predictions = service.run(input_data = input_json)
# Get the predicted class - it'll be the first (and only) one.
predicted_classes = json.loads(predictions)
print(predicted_classes[0])
###Output
Patient: [2, 180, 74, 24, 21, 23.9091702, 1.488172308, 22]
diabetic
###Markdown
You can also send multiple patient observations to the service, and get back a prediction for each one.
###Code
import json
# This time our input is an array of two feature arrays
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22],
[0,148,58,11,179,39.19207553,0.160829008,45]]
# Convert the array or arrays to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Call the web service, passing the input data
predictions = service.run(input_data = input_json)
# Get the predicted classes.
predicted_classes = json.loads(predictions)
for i in range(len(x_new)):
print ("Patient {}".format(x_new[i]), predicted_classes[i] )
###Output
Patient [2, 180, 74, 24, 21, 23.9091702, 1.488172308, 22] diabetic
Patient [0, 148, 58, 11, 179, 39.19207553, 0.160829008, 45] not-diabetic
###Markdown
The code above uses the Azure Machine Learning SDK to connect to the containerized web service and use it to generate predictions from your diabetes classification model. In production, a model is likely to be consumed by business applications that do not use the Azure Machine Learning SDK, but simply make HTTP requests to the web service.

Let's determine the URL to which these applications must submit their requests:
###Code
endpoint = service.scoring_uri
print(endpoint)
###Output
http://356afa6e-f5e5-4a25-bffe-e1d215826ebb.westus2.azurecontainer.io/score
###Markdown
Now that you know the endpoint URI, an application can simply make an HTTP request, sending the patient data in JSON format, and receive back the predicted class(es).
###Code
import requests
import json
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22],
[0,148,58,11,179,39.19207553,0.160829008,45]]
# Convert the array to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Set the content type
headers = { 'Content-Type':'application/json' }
predictions = requests.post(endpoint, input_json, headers = headers)
predicted_classes = json.loads(predictions.json())
for i in range(len(x_new)):
print ("Patient {}".format(x_new[i]), predicted_classes[i] )
###Output
Patient [2, 180, 74, 24, 21, 23.9091702, 1.488172308, 22] diabetic
Patient [0, 148, 58, 11, 179, 39.19207553, 0.160829008, 45] not-diabetic
###Markdown
You've deployed your web service as an Azure Container Instance (ACI) service that requires no authentication. This is fine for development and testing, but for production you should consider deploying to an Azure Kubernetes Service (AKS) cluster and enabling token-based authentication. This would require REST requests to include an **Authorization** header.

Delete the service

When you no longer need your service, you should delete it to avoid incurring unnecessary charges.
###Code
service.delete()
print ('Service deleted.')
###Output
Service deleted.
###Markdown
Create a real-time inferencing service

After training a predictive model, you can deploy it as a real-time service that clients can use to get predictions from new data.

Connect to your workspace

To get started, connect to your workspace.

> **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.
###Code
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
###Output
_____no_output_____
###Markdown
Train and register a model

Now let's train and register a model.
###Code
from azureml.core import Experiment
from azureml.core import Model
import pandas as pd
import numpy as np
import joblib
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
# Create an Azure ML experiment in your workspace
experiment = Experiment(workspace=ws, name="mslearn-train-diabetes")
run = experiment.start_logging()
print("Starting experiment:", experiment.name)
# load the diabetes dataset
print("Loading Data...")
diabetes = pd.read_csv('data/diabetes.csv')
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a decision tree model
print('Training a decision tree model')
model = DecisionTreeClassifier().fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', float(auc))
# Save the trained model
model_file = 'diabetes_model.pkl'
joblib.dump(value=model, filename=model_file)
run.upload_file(name = 'outputs/' + model_file, path_or_stream = './' + model_file)
# Complete the run
run.complete()
# Register the model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
tags={'Training context':'Inline Training'},
properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
print('Model trained and registered.')
###Output
_____no_output_____
###Markdown
Deploy the model as a web service

You have trained and registered a machine learning model that classifies patients based on the likelihood of them having diabetes. This model could be used in a production environment such as a doctor's surgery where only patients deemed to be at risk need to be subjected to a clinical test for diabetes. To support this scenario, you will deploy the model as a web service.

First, let's determine what models you have registered in the workspace.
###Code
from azureml.core import Model
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
###Output
_____no_output_____
###Markdown
Right, now let's get the model that we want to deploy. By default, if we specify a model name, the latest version will be returned.
###Code
model = ws.models['diabetes_model']
print(model.name, 'version', model.version)
###Output
_____no_output_____
###Markdown
We're going to create a web service to host this model, and this will require some code and configuration files; so let's create a folder for those.
###Code
import os
folder_name = 'diabetes_service'
# Create a folder for the web service files
experiment_folder = './' + folder_name
os.makedirs(experiment_folder, exist_ok=True)
print(folder_name, 'folder created.')
# Set path for scoring script and environment files
script_file = os.path.join(experiment_folder,"score_diabetes.py")
env_file = os.path.join(experiment_folder,"diabetes_env.yml")
###Output
_____no_output_____
###Markdown
The web service where we deploy the model will need some Python code to load the input data, get the model from the workspace, and generate and return predictions. We'll save this code in an *entry script* (often called a *scoring script*) that will be deployed to the web service:
###Code
%%writefile $script_file
import json
import joblib
import numpy as np
from azureml.core.model import Model
# Called when the service is loaded
def init():
global model
# Get the path to the deployed model file and load it
model_path = Model.get_model_path('diabetes_model')
model = joblib.load(model_path)
# Called when a request is received
def run(raw_data):
# Get the input data as a numpy array
data = np.array(json.loads(raw_data)['data'])
# Get a prediction from the model
predictions = model.predict(data)
# Get the corresponding classname for each prediction (0 or 1)
classnames = ['not-diabetic', 'diabetic']
predicted_classes = []
for prediction in predictions:
predicted_classes.append(classnames[prediction])
# Return the predictions as JSON
return json.dumps(predicted_classes)
###Output
_____no_output_____
###Markdown
The web service will be hosted in a container, and the container will need to install any required Python dependencies when it gets initialized. In this case, our scoring code requires the **scikit-learn** and **azureml-defaults** packages, so we'll create a .yml file that tells the container host to install them into the environment.
###Code
%%writefile $env_file
name: inference_env
dependencies:
- python=3.6.2
- scikit-learn
- pip
- pip:
- azureml-defaults
###Output
_____no_output_____
###Markdown
Now you're ready to deploy. We'll deploy the container as a service named **diabetes-service**. The deployment process includes the following steps:

1. Define an inference configuration, which includes the scoring and environment files required to load and use the model.
2. Define a deployment configuration that defines the execution environment in which the service will be hosted. In this case, an Azure Container Instance.
3. Deploy the model as a web service.
4. Verify the status of the deployed service.

> **More Information**: For more details about model deployment, and options for target execution environments, see the [documentation](https://docs.microsoft.com/azure/machine-learning/how-to-deploy-and-where).

Deployment will take some time as it first runs a process to create a container image, and then runs a process to create a web service based on the image. When deployment has completed successfully, you'll see a status of **Healthy**.
###Code
from azureml.core.webservice import AciWebservice
from azureml.core.model import InferenceConfig
# Configure the scoring environment
inference_config = InferenceConfig(runtime= "python",
entry_script=script_file,
conda_file=env_file)
deployment_config = AciWebservice.deploy_configuration(cpu_cores = 1, memory_gb = 1)
service_name = "diabetes-service"
service = Model.deploy(ws, service_name, [model], inference_config, deployment_config)
service.wait_for_deployment(True)
print(service.state)
###Output
_____no_output_____
###Markdown
Hopefully, the deployment has been successful and you can see a status of **Healthy**. If not, you can use the following code to get the service logs to help you troubleshoot.
###Code
print(service.get_logs())
# If you need to make a change and redeploy, you may need to delete unhealthy service using the following code:
#service.delete()
###Output
_____no_output_____
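###Markdown
If you redeploy often, you may only want the (often lengthy) logs when something is actually wrong. A minimal sketch using the same service object and state attribute as above:
###Code
# Fetch the logs only when the service is not in a Healthy state
if service.state != 'Healthy':
    print(service.get_logs())
else:
    print('Service is healthy - nothing to troubleshoot.')
###Output
_____no_output_____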
###Markdown
Take a look at your workspace in [Azure Machine Learning Studio](https://ml.azure.com) and view the **Endpoints** page, which shows the deployed services in your workspace.
You can also retrieve the names of web services in your workspace by running the following code:
###Code
for webservice_name in ws.webservices:
print(webservice_name)
###Output
_____no_output_____
###Markdown
Use the web service
With the service deployed, now you can consume it from a client application.
###Code
import json
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22]]
print ('Patient: {}'.format(x_new[0]))
# Convert the array to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Call the web service, passing the input data (the web service will also accept the data in binary format)
predictions = service.run(input_data = input_json)
# Get the predicted class - it'll be the first (and only) one.
predicted_classes = json.loads(predictions)
print(predicted_classes[0])
###Output
_____no_output_____
###Markdown
You can also send multiple patient observations to the service, and get back a prediction for each one.
###Code
import json
# This time our input is an array of two feature arrays
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22],
[0,148,58,11,179,39.19207553,0.160829008,45]]
# Convert the array or arrays to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Call the web service, passing the input data
predictions = service.run(input_data = input_json)
# Get the predicted classes.
predicted_classes = json.loads(predictions)
for i in range(len(x_new)):
print ("Patient {}".format(x_new[i]), predicted_classes[i] )
###Output
_____no_output_____
###Markdown
The code above uses the Azure Machine Learning SDK to connect to the containerized web service and use it to generate predictions from your diabetes classification model. In production, a model is likely to be consumed by business applications that do not use the Azure Machine Learning SDK, but simply make HTTP requests to the web service.
Let's determine the URL to which these applications must submit their requests:
###Code
endpoint = service.scoring_uri
print(endpoint)
###Output
_____no_output_____
###Markdown
Now that you know the endpoint URI, an application can simply make an HTTP request, sending the patient data in JSON format, and receive back the predicted class(es).
###Code
import requests
import json
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22],
[0,148,58,11,179,39.19207553,0.160829008,45]]
# Convert the array to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Set the content type
headers = { 'Content-Type':'application/json' }
predictions = requests.post(endpoint, input_json, headers = headers)
predicted_classes = json.loads(predictions.json())
for i in range(len(x_new)):
print ("Patient {}".format(x_new[i]), predicted_classes[i] )
###Output
_____no_output_____
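###Markdown
The request above works because this service was deployed without authentication. As a sketch of what an authenticated call would look like: ACI also supports key-based auth (auth_enabled=True in the deployment configuration), after which the key is retrieved with service.get_keys() and sent as a bearer token. The key value below is a hypothetical placeholder, and the endpoint and payload are reused from the previous cell.
###Code
import requests
import json

# Illustrative only: with an auth-enabled service you would instead use
# primary_key, secondary_key = service.get_keys()
primary_key = '<service-key>'  # hypothetical placeholder

auth_headers = { 'Content-Type':'application/json',
                 'Authorization': 'Bearer ' + primary_key }
response = requests.post(endpoint, input_json, headers = auth_headers)
print(response.status_code)
###Output
_____no_output_____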
###Markdown
You've deployed your web service as an Azure Container Instance (ACI) service that requires no authentication. This is fine for development and testing, but for production you should consider deploying to an Azure Kubernetes Service (AKS) cluster and enabling token-based authentication. This would require REST requests to include an **Authorization** header.
Delete the service
When you no longer need your service, you should delete it to avoid incurring unnecessary charges.
###Code
service.delete()
print ('Service deleted.')
###Output
_____no_output_____
###Markdown
Create a real-time inferencing service
After training a predictive model, you can deploy it as a real-time service that clients can use to get predictions from new data.
Connect to your workspace
To get started, connect to your workspace.
> **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.
###Code
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
###Output
_____no_output_____
###Markdown
Train and register a model
Now let's train and register a model.
###Code
from azureml.core import Experiment
from azureml.core import Model
import pandas as pd
import numpy as np
import joblib
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
# Create an Azure ML experiment in your workspace
experiment = Experiment(workspace=ws, name="mslearn-train-diabetes")
run = experiment.start_logging()
print("Starting experiment:", experiment.name)
# Load the diabetes dataset
print("Loading Data...")
diabetes = pd.read_csv('data/diabetes.csv')
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a decision tree model
print('Training a decision tree model')
model = DecisionTreeClassifier().fit(X_train, y_train)
# Calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# Calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# Save the trained model
model_file = 'diabetes_model.pkl'
joblib.dump(value=model, filename=model_file)
run.upload_file(name = 'outputs/' + model_file, path_or_stream = './' + model_file)
# Complete the run
run.complete()
# Register the model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
tags={'Training context':'Inline Training'},
properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
print('Model trained and registered.')
###Output
_____no_output_____
###Markdown
Deploy the model as a web service
You have trained and registered a machine learning model that classifies patients based on the likelihood of them having diabetes. This model could be used in a production environment such as a doctor's surgery, where only patients deemed to be at risk need to be subjected to a clinical test for diabetes. To support this scenario, you will deploy the model as a web service.
First, let's determine what models you have registered in the workspace.
###Code
from azureml.core import Model
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
###Output
_____no_output_____
###Markdown
Right, now let's get the model that we want to deploy. By default, if we specify a model name, the latest version will be returned.
###Code
model = ws.models['diabetes_model']
print(model.name, 'version', model.version)
###Output
_____no_output_____
###Markdown
We're going to create a web service to host this model, and this will require some code and configuration files; so let's create a folder for those.
###Code
import os
folder_name = 'diabetes_service'
# Create a folder for the web service files
experiment_folder = './' + folder_name
os.makedirs(experiment_folder, exist_ok=True)
print(folder_name, 'folder created.')
# Set paths for the scoring script and environment file
script_file = os.path.join(experiment_folder,"score_diabetes.py")
env_file = os.path.join(experiment_folder,"diabetes_env.yml")
###Output
_____no_output_____
###Markdown
The web service where we deploy the model will need some Python code to load the input data, get the model from the workspace, and generate and return predictions. We'll save this code in an *entry script* (often called a *scoring script*) that will be deployed to the web service.
###Code
%%writefile $script_file
import json
import joblib
import numpy as np
from azureml.core.model import Model
# Called when the service is loaded
def init():
global model
# Get the path to the deployed model file and load it
model_path = Model.get_model_path('diabetes_model')
model = joblib.load(model_path)
# Called when a request is received
def run(raw_data):
# Get the input data as a numpy array
data = np.array(json.loads(raw_data)['data'])
# Get a prediction from the model
predictions = model.predict(data)
# Get the corresponding classname for each prediction (0 or 1)
classnames = ['not-diabetic', 'diabetic']
predicted_classes = []
for prediction in predictions:
predicted_classes.append(classnames[prediction])
# Return the predictions as JSON
return json.dumps(predicted_classes)
###Output
_____no_output_____
###Markdown
The web service will be hosted in a container, and the container will need to install any required Python dependencies when it gets initialized. In this case, our scoring code requires the **scikit-learn** and **azureml-defaults** packages, so we'll create a .yml file that tells the container host to install them into the environment.
###Code
%%writefile $env_file
name: inference_env
dependencies:
- python=3.6.2
- scikit-learn
- pip
- pip:
- azureml-defaults
###Output
_____no_output_____
###Markdown
Now you're ready to deploy. We'll deploy the container as a service named **diabetes-service**. The deployment process includes the following steps:
1. Define an inference configuration, which includes the scoring and environment files required to load and use the model.
2. Define a deployment configuration that defines the execution environment in which the service will be hosted. In this case, an Azure Container Instance.
3. Deploy the model as a web service.
4. Verify the status of the deployed service.
> **More Information**: For more details about model deployment, and options for target execution environments, see the [documentation](https://docs.microsoft.com/azure/machine-learning/how-to-deploy-and-where).
Deployment will take some time as it first runs a process to create a container image, and then runs a process to create a web service based on the image. When deployment has completed successfully, you'll see a status of **Healthy**.
###Code
from azureml.core.webservice import AciWebservice
from azureml.core.model import InferenceConfig
# Configure the scoring environment
inference_config = InferenceConfig(runtime= "python",
entry_script=script_file,
conda_file=env_file)
deployment_config = AciWebservice.deploy_configuration(cpu_cores = 1, memory_gb = 1)
service_name = "diabetes-service"
service = Model.deploy(ws, service_name, [model], inference_config, deployment_config)
service.wait_for_deployment(True)
print(service.state)
###Output
_____no_output_____
###Markdown
Hopefully, the deployment has been successful and you can see a status of **Healthy**. If not, you can use the following code to get the service logs to help you troubleshoot.
###Code
print(service.get_logs())
# If you need to make a change and redeploy, you may need to delete the unhealthy service using the following code:
#service.delete()
###Output
_____no_output_____
###Markdown
Take a look at your workspace in [Azure Machine Learning Studio](https://ml.azure.com) and view the **Endpoints** page, which shows the deployed services in your workspace.
You can also retrieve the names of web services in your workspace by running the following code:
###Code
for webservice_name in ws.webservices:
print(webservice_name)
###Output
_____no_output_____
###Markdown
Use the web service
With the service deployed, now you can consume it from a client application.
###Code
import json
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22]]
print ('Patient: {}'.format(x_new[0]))
# Convert the array to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Call the web service, passing the input data (the web service will also accept the data in binary format)
predictions = service.run(input_data = input_json)
# Get the predicted class - it'll be the first (and only) one.
predicted_classes = json.loads(predictions)
print(predicted_classes[0])
###Output
_____no_output_____
###Markdown
You can also send multiple patient observations to the service, and get back a prediction for each one.
###Code
import json
# This time our input is an array of two feature arrays
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22],
[0,148,58,11,179,39.19207553,0.160829008,45]]
# Convert the array or arrays to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Call the web service, passing the input data
predictions = service.run(input_data = input_json)
# Get the predicted classes.
predicted_classes = json.loads(predictions)
for i in range(len(x_new)):
print ("Patient {}".format(x_new[i]), predicted_classes[i] )
###Output
_____no_output_____
###Markdown
The code above uses the Azure Machine Learning SDK to connect to the containerized web service and use it to generate predictions from your diabetes classification model. In production, a model is likely to be consumed by business applications that do not use the Azure Machine Learning SDK, but simply make HTTP requests to the web service.
Let's determine the URL to which these applications must submit their requests:
###Code
endpoint = service.scoring_uri
print(endpoint)
###Output
_____no_output_____
###Markdown
Now that you know the endpoint URI, an application can simply make an HTTP request, sending the patient data in JSON format, and receive back the predicted class(es).
###Code
import requests
import json
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22],
[0,148,58,11,179,39.19207553,0.160829008,45]]
# Convert the array to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Set the content type
headers = { 'Content-Type':'application/json' }
predictions = requests.post(endpoint, input_json, headers = headers)
predicted_classes = json.loads(predictions.json())
for i in range(len(x_new)):
print ("Patient {}".format(x_new[i]), predicted_classes[i] )
###Output
_____no_output_____
###Markdown
You've deployed your web service as an Azure Container Instance (ACI) service that requires no authentication. This is fine for development and testing, but for production you should consider deploying to an Azure Kubernetes Service (AKS) cluster and enabling token-based authentication. This would require REST requests to include an **Authorization** header.
Delete the service
When you no longer need your service, you should delete it to avoid incurring unnecessary charges.
###Code
service.delete()
print ('Service deleted.')
###Output
_____no_output_____
###Markdown
Create a real-time inferencing service
After training a predictive model, you can deploy it as a real-time service that clients can use to get predictions from new data.
Connect to your workspace
To get started, connect to your workspace.
> **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.
###Code
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
###Output
Ready to use Azure ML 1.34.0 to work with azml-ws
###Markdown
Train and register a model
Now let's train and register a model.
###Code
from azureml.core import Experiment
from azureml.core import Model
import pandas as pd
import numpy as np
import joblib
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
# Create an Azure ML experiment in your workspace
experiment = Experiment(workspace=ws, name="mslearn-train-diabetes")
run = experiment.start_logging()
print("Starting experiment:", experiment.name)
# load the diabetes dataset
print("Loading Data...")
diabetes = pd.read_csv('data/diabetes.csv')
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a decision tree model
print('Training a decision tree model')
model = DecisionTreeClassifier().fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# Save the trained model
model_file = 'diabetes_model.pkl'
joblib.dump(value=model, filename=model_file)
run.upload_file(name = 'outputs/' + model_file, path_or_stream = './' + model_file)
# Complete the run
run.complete()
# Register the model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
tags={'Training context':'Inline Training'},
properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
print('Model trained and registered.')
###Output
Starting experiment: mslearn-train-diabetes
Loading Data...
Training a decision tree model
Accuracy: 0.8876666666666667
AUC: 0.8738817851634408
Model trained and registered.
###Markdown
Deploy the model as a web service
You have trained and registered a machine learning model that classifies patients based on the likelihood of them having diabetes. This model could be used in a production environment such as a doctor's surgery where only patients deemed to be at risk need to be subjected to a clinical test for diabetes. To support this scenario, you will deploy the model as a web service.
First, let's determine what models you have registered in the workspace.
###Code
from azureml.core import Model
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
###Output
diabetes_model version: 7
Training context : Inline Training
AUC : 0.8738817851634408
Accuracy : 0.8876666666666667
diabetes_model version: 6
Training context : Pipeline
AUC : 0.8816080060095506
Accuracy : 0.8971111111111111
diabetes_model version: 5
Training context : Compute cluster
AUC : 0.8801195539554468
Accuracy : 0.896
diabetes_model version: 4
Training context : File dataset
AUC : 0.8468497021067503
Accuracy : 0.7788888888888889
diabetes_model version: 3
Training context : Tabular dataset
AUC : 0.8568595320655352
Accuracy : 0.7891111111111111
diabetes_model version: 2
Training context : Parameterized script
AUC : 0.8484377332205582
Accuracy : 0.774
diabetes_model version: 1
Training context : Script
AUC : 0.8483377282451863
Accuracy : 0.774
amlstudio-designer-predict-dia version: 1
CreatedByAMLStudio : true
AutoMLc4da1b9a60 version: 1
###Markdown
Right, now let's get the model that we want to deploy. By default, if we specify a model name, the latest version will be returned.
###Code
model = ws.models['diabetes_model']
print(model.name, 'version', model.version)
###Output
diabetes_model version 7
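###Markdown
If you ever need to deploy something other than the latest version, the Model constructor accepts an explicit version number. A minimal sketch (version 1 here is just an example):
###Code
from azureml.core import Model

# Retrieve a specific registered version instead of the latest
pinned_model = Model(ws, name='diabetes_model', version=1)
print(pinned_model.name, 'version', pinned_model.version)
###Output
_____no_output_____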
###Markdown
We're going to create a web service to host this model, and this will require some code and configuration files; so let's create a folder for those.
###Code
import os
# Create a folder for the deployment files
deployment_folder = './diabetes_service'
os.makedirs(deployment_folder, exist_ok=True)
print(deployment_folder, 'folder created.')
# Set path for scoring script
script_file = 'score_diabetes.py'
script_path = os.path.join(deployment_folder,script_file)
###Output
./diabetes_service folder created.
###Markdown
The web service where we deploy the model will need some Python code to load the input data, get the model from the workspace, and generate and return predictions. We'll save this code in an *entry script* (often called a *scoring script*) that will be deployed to the web service.
The script consists of two functions:
- **init**: This function is called when the service is initialized, and is generally used to load the model. Note that the scoring script uses the **AZUREML_MODEL_DIR** environment variable to determine the folder where the model is stored.
- **run**: This function is called each time a client application submits new data, and is generally used to generate predictions from the model.
###Code
%%writefile $script_path
import json
import joblib
import numpy as np
import os
# Called when the service is loaded
def init():
global model
# Get the path to the deployed model file and load it
model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'diabetes_model.pkl')
model = joblib.load(model_path)
# Called when a request is received
def run(raw_data):
# Get the input data as a numpy array
data = np.array(json.loads(raw_data)['data'])
# Get a prediction from the model
predictions = model.predict(data)
# Get the corresponding classname for each prediction (0 or 1)
classnames = ['not-diabetic', 'diabetic']
predicted_classes = []
for prediction in predictions:
predicted_classes.append(classnames[prediction])
# Return the predictions as JSON
return json.dumps(predicted_classes)
###Output
Writing ./diabetes_service/score_diabetes.py
###Markdown
The web service will be hosted in a container, and the container will need to install any required Python dependencies when it gets initialized. In this case, our scoring code requires **scikit-learn** and some Azure Machine Learning specific packages that are used by the scoring web service, so we'll create an environment that includes these. Then we'll add that environment to an *inference configuration* along with the scoring script, and define a *deployment configuration* for the container in which the environment and script will be hosted.
We can then deploy the model as a service based on these configurations.
> **More Information**: For more details about model deployment, and options for target execution environments, see the [documentation](https://docs.microsoft.com/azure/machine-learning/how-to-deploy-and-where).
Deployment will take some time as it first runs a process to create a container image, and then runs a process to create a web service based on the image. When deployment has completed successfully, you'll see a status of **Healthy**.
###Code
from azureml.core import Environment
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
# Configure the scoring environment
service_env = Environment(name='service-env')
python_packages = ['scikit-learn', 'azureml-defaults', 'azure-ml-api-sdk']
for package in python_packages:
service_env.python.conda_dependencies.add_pip_package(package)
inference_config = InferenceConfig(source_directory=deployment_folder,
entry_script=script_file,
environment=service_env)
# Configure the web service container
deployment_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)
# Deploy the model as a service
print('Deploying model...')
service_name = "diabetes-service"
service = Model.deploy(ws, service_name, [model], inference_config, deployment_config, overwrite=True)
service.wait_for_deployment(True)
print(service.state)
###Output
Deploying model...
Tips: You can try get_logs(): https://aka.ms/debugimage#dockerlog or local deployment: https://aka.ms/debugimage#debug-locally to debug if deployment takes longer than 10 minutes.
Running
2021-11-03 09:19:15+00:00 Creating Container Registry if not exists.
2021-11-03 09:19:15+00:00 Registering the environment.
2021-11-03 09:19:19+00:00 Building image..
2021-11-03 09:23:28+00:00 Generating deployment configuration..
2021-11-03 09:23:29+00:00 Submitting deployment to compute..
2021-11-03 09:23:47+00:00 Checking the status of deployment diabetes-service..
2021-11-03 09:25:56+00:00 Checking the status of inference endpoint diabetes-service.
Succeeded
ACI service creation operation finished, operation "Succeeded"
Healthy
###Markdown
Hopefully, the deployment has been successful and you can see a status of **Healthy**. If not, you can use the following code to get the service logs to help you troubleshoot.
###Code
print(service.get_logs())
# If you need to make a change and redeploy, you may need to delete unhealthy service using the following code:
#service.delete()
###Output
2021-11-03T09:25:41,863502100+00:00 - iot-server/run
2021-11-03T09:25:41,872147300+00:00 - gunicorn/run
Dynamic Python package installation is disabled.
Starting HTTP server
2021-11-03T09:25:41,872843200+00:00 - rsyslog/run
2021-11-03T09:25:42,021310400+00:00 - nginx/run
EdgeHubConnectionString and IOTEDGE_IOTHUBHOSTNAME are not set. Exiting...
2021-11-03T09:25:42,675010200+00:00 - iot-server/finish 1 0
2021-11-03T09:25:42,681317000+00:00 - Exit code 1 is normal. Not restarting iot-server.
Starting gunicorn 20.1.0
Listening at: http://127.0.0.1:31311 (72)
Using worker: sync
worker timeout is set to 300
Booting worker with pid: 99
SPARK_HOME not set. Skipping PySpark Initialization.
Initializing logger
2021-11-03 09:25:44,009 | root | INFO | Starting up app insights client
logging socket was found. logging is available.
logging socket was found. logging is available.
2021-11-03 09:25:44,011 | root | INFO | Starting up request id generator
2021-11-03 09:25:44,011 | root | INFO | Starting up app insight hooks
2021-11-03 09:25:44,011 | root | INFO | Invoking user's init function
/azureml-envs/azureml_b111972b96fe2f23e1032e165eb7c9c3/lib/python3.6/site-packages/sklearn/base.py:315: UserWarning: Trying to unpickle estimator DecisionTreeClassifier from version 0.22.2.post1 when using version 0.24.2. This might lead to breaking code or invalid results. Use at your own risk.
UserWarning)
no request id,/azureml-envs/azureml_b111972b96fe2f23e1032e165eb7c9c3/lib/python3.6/site-packages/sklearn/base.py:315: UserWarning: Trying to unpickle estimator DecisionTreeClassifier from version 0.22.2.post1 when using version 0.24.2. This might lead to breaking code or invalid results. Use at your own risk.
UserWarning)
2021-11-03 09:25:44,851 | root | INFO | Users's init has completed successfully
2021-11-03 09:25:44,864 | root | INFO | Skipping middleware: dbg_model_info as it's not enabled.
2021-11-03 09:25:44,864 | root | INFO | Skipping middleware: dbg_resource_usage as it's not enabled.
2021-11-03 09:25:44,865 | root | INFO | Scoring timeout is found from os.environ: 60000 ms
2021-11-03 09:25:56,744 | root | INFO | Swagger file not present
2021-11-03 09:25:56,744 | root | INFO | 404
127.0.0.1 - - [03/Nov/2021:09:25:56 +0000] "GET /swagger.json HTTP/1.0" 404 19 "-" "Go-http-client/1.1"
2021-11-03 09:26:00,063 | root | INFO | Swagger file not present
2021-11-03 09:26:00,064 | root | INFO | 404
127.0.0.1 - - [03/Nov/2021:09:26:00 +0000] "GET /swagger.json HTTP/1.0" 404 19 "-" "Go-http-client/1.1"
###Markdown
Take a look at your workspace in [Azure Machine Learning Studio](https://ml.azure.com) and view the **Endpoints** page, which shows the deployed services in your workspace.
You can also retrieve the names of web services in your workspace by running the following code:
###Code
for webservice_name in ws.webservices:
print(webservice_name)
###Output
diabetes-service
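###Markdown
ws.webservices gives you the names; if you want the service objects themselves (to inspect their state, for example), you can list them directly. A minimal sketch:
###Code
from azureml.core.webservice import Webservice

# List the deployed services with their current state
for webservice in Webservice.list(ws):
    print(webservice.name, '-', webservice.state)
###Output
_____no_output_____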
###Markdown
Use the web service
With the service deployed, now you can consume it from a client application.
###Code
import json
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22]]
print ('Patient: {}'.format(x_new[0]))
# Convert the array to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Call the web service, passing the input data (the web service will also accept the data in binary format)
predictions = service.run(input_data = input_json)
# Get the predicted class - it'll be the first (and only) one.
predicted_classes = json.loads(predictions)
print(predicted_classes[0])
###Output
_____no_output_____
###Markdown
You can also send multiple patient observations to the service, and get back a prediction for each one.
###Code
import json
# This time our input is an array of two feature arrays
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22],
[0,148,58,11,179,39.19207553,0.160829008,45]]
# Convert the array or arrays to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Call the web service, passing the input data
predictions = service.run(input_data = input_json)
# Get the predicted classes.
predicted_classes = json.loads(predictions)
for i in range(len(x_new)):
print ("Patient {}".format(x_new[i]), predicted_classes[i] )
###Output
_____no_output_____
###Markdown
The code above uses the Azure Machine Learning SDK to connect to the containerized web service and use it to generate predictions from your diabetes classification model. In production, a model is likely to be consumed by business applications that do not use the Azure Machine Learning SDK, but simply make HTTP requests to the web service.
Let's determine the URL to which these applications must submit their requests:
###Code
endpoint = service.scoring_uri
print(endpoint)
###Output
_____no_output_____
###Markdown
Now that you know the endpoint URI, an application can simply make an HTTP request, sending the patient data in JSON format, and receive back the predicted class(es).
###Code
import requests
import json
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22],
[0,148,58,11,179,39.19207553,0.160829008,45]]
# Convert the array to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Set the content type
headers = { 'Content-Type':'application/json' }
predictions = requests.post(endpoint, input_json, headers = headers)
predicted_classes = json.loads(predictions.json())
for i in range(len(x_new)):
print ("Patient {}".format(x_new[i]), predicted_classes[i] )
###Output
_____no_output_____
###Markdown
You've deployed your web service as an Azure Container Instance (ACI) service that requires no authentication. This is fine for development and testing, but for production you should consider deploying to an Azure Kubernetes Service (AKS) cluster and enabling token-based authentication. This would require REST requests to include an **Authorization** header.
Delete the service
When you no longer need your service, you should delete it to avoid incurring unnecessary charges.
###Code
service.delete()
print ('Service deleted.')
###Output
_____no_output_____
###Markdown
Create a real-time inferencing service
After training a predictive model, you can deploy it as a real-time service that clients can use to get predictions from new data.
Connect to your workspace
To get started, connect to your workspace.
> **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.
###Code
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
###Output
Ready to use Azure ML 1.26.0 to work with mls-dp100
###Markdown
Train and register a model
Now let's train and register a model.
###Code
from azureml.core import Experiment
from azureml.core import Model
import pandas as pd
import numpy as np
import joblib
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
# Create an Azure ML experiment in your workspace
experiment = Experiment(workspace=ws, name="mslearn-train-diabetes")
run = experiment.start_logging()
print("Starting experiment:", experiment.name)
# load the diabetes dataset
print("Loading Data...")
diabetes = pd.read_csv('data/diabetes.csv')
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a decision tree model
print('Training a decision tree model')
model = DecisionTreeClassifier().fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# Save the trained model
model_file = 'diabetes_model.pkl'
joblib.dump(value=model, filename=model_file)
run.upload_file(name = 'outputs/' + model_file, path_or_stream = './' + model_file)
# Complete the run
run.complete()
# Register the model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
tags={'Training context':'Inline Training'},
properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
print('Model trained and registered.')
###Output
Starting experiment: mslearn-train-diabetes
Loading Data...
Training a decision tree model
Accuracy: 0.888
AUC: 0.8751082143390218
Model trained and registered.
###Markdown
Deploy the model as a web service
You have trained and registered a machine learning model that classifies patients based on the likelihood of them having diabetes. This model could be used in a production environment such as a doctor's surgery where only patients deemed to be at risk need to be subjected to a clinical test for diabetes. To support this scenario, you will deploy the model as a web service.
First, let's determine what models you have registered in the workspace.
###Code
from azureml.core import Model
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
###Output
diabetes_model version: 6
Training context : Inline Training
AUC : 0.8751082143390218
Accuracy : 0.888
diabetes_model version: 5
Training context : Compute cluster
AUC : 0.886087297746153
Accuracy : 0.9011111111111111
diabetes_model version: 4
Training context : File dataset
AUC : 0.8468331741963582
Accuracy : 0.7793333333333333
diabetes_model version: 3
Training context : Tabular dataset
AUC : 0.8568509052814499
Accuracy : 0.7891111111111111
diabetes_model version: 2
Training context : Parameterized script
AUC : 0.8484357430717946
Accuracy : 0.774
diabetes_model version: 1
Training context : Script
AUC : 0.8483203144435048
Accuracy : 0.774
amlstudio-designer-predict-dia version: 1
CreatedByAMLStudio : true
AutoML29253f2ad0 version: 1
###Markdown
Right, now let's get the model that we want to deploy. By default, if we specify a model name, the latest version will be returned.
###Code
model = ws.models['diabetes_model']
print(model.name, 'version', model.version)
###Output
diabetes_model version 6
###Markdown
We're going to create a web service to host this model, and this will require some code and configuration files; so let's create a folder for those.
###Code
import os
folder_name = 'diabetes_service'
# Create a folder for the web service files
experiment_folder = './' + folder_name
os.makedirs(experiment_folder, exist_ok=True)
print(folder_name, 'folder created.')
# Set path for scoring script
script_file = os.path.join(experiment_folder,"score_diabetes.py")
###Output
diabetes_service folder created.
###Markdown
The web service where we deploy the model will need some Python code to load the input data, get the model from the workspace, and generate and return predictions. We'll save this code in an *entry script* (often called a *scoring script*) that will be deployed to the web service:
###Code
%%writefile $script_file
import json
import joblib
import numpy as np
from azureml.core.model import Model
# Called when the service is loaded
def init():
global model
# Get the path to the deployed model file and load it
model_path = Model.get_model_path('diabetes_model')
model = joblib.load(model_path)
# Called when a request is received
def run(raw_data):
# Get the input data as a numpy array
data = np.array(json.loads(raw_data)['data'])
# Get a prediction from the model
predictions = model.predict(data)
# Get the corresponding classname for each prediction (0 or 1)
classnames = ['not-diabetic', 'diabetic']
predicted_classes = []
for prediction in predictions:
predicted_classes.append(classnames[prediction])
# Return the predictions as JSON
return json.dumps(predicted_classes)
###Output
Writing ./diabetes_service/score_diabetes.py
###Markdown
The web service will be hosted in a container, and the container will need to install any required Python dependencies when it gets initialized. In this case, our scoring code requires **scikit-learn**, so we'll create a .yml file that tells the container host to install this into the environment.
###Code
from azureml.core.conda_dependencies import CondaDependencies
# Add the dependencies for our model (AzureML defaults is already included)
myenv = CondaDependencies()
myenv.add_conda_package('scikit-learn')
# Save the environment config as a .yml file
env_file = os.path.join(experiment_folder,"diabetes_env.yml")
with open(env_file,"w") as f:
f.write(myenv.serialize_to_string())
print("Saved dependency info in", env_file)
# Print the .yml file
with open(env_file,"r") as f:
print(f.read())
###Output
Saved dependency info in ./diabetes_service/diabetes_env.yml
# Conda environment specification. The dependencies defined in this file will
# be automatically provisioned for runs with userManagedDependencies=False.
# Details about the Conda environment file format:
# https://conda.io/docs/user-guide/tasks/manage-environments.html#create-env-file-manually
name: project_environment
dependencies:
# The python interpreter version.
# Currently Azure ML only supports 3.5.2 and later.
- python=3.6.2
- pip:
# Required packages for AzureML execution, history, and data preparation.
- azureml-defaults
- scikit-learn
channels:
- anaconda
- conda-forge
###Markdown
Now you're ready to deploy. We'll deploy the container as a service named **diabetes-service**. The deployment process includes the following steps:
1. Define an inference configuration, which includes the scoring and environment files required to load and use the model.
2. Define a deployment configuration that defines the execution environment in which the service will be hosted. In this case, an Azure Container Instance.
3. Deploy the model as a web service.
4. Verify the status of the deployed service.
> **More Information**: For more details about model deployment, and options for target execution environments, see the [documentation](https://docs.microsoft.com/azure/machine-learning/how-to-deploy-and-where).
Deployment will take some time as it first runs a process to create a container image, and then runs a process to create a web service based on the image. When deployment has completed successfully, you'll see a status of **Healthy**.
###Code
from azureml.core.webservice import AciWebservice
from azureml.core.model import InferenceConfig
# Configure the scoring environment
inference_config = InferenceConfig(runtime= "python",
entry_script=script_file,
conda_file=env_file)
deployment_config = AciWebservice.deploy_configuration(cpu_cores = 1, memory_gb = 1)
service_name = "diabetes-service"
service = Model.deploy(ws, service_name, [model], inference_config, deployment_config)
service.wait_for_deployment(True)
print(service.state)
###Output
Tips: You can try get_logs(): https://aka.ms/debugimage#dockerlog or local deployment: https://aka.ms/debugimage#debug-locally to debug if deployment takes longer than 10 minutes.
Running
2021-04-09 01:17:54+00:00 Creating Container Registry if not exists.
2021-04-09 01:17:55+00:00 Building image..
2021-04-09 01:23:54+00:00 Generating deployment configuration.
2021-04-09 01:23:55+00:00 Submitting deployment to compute..
2021-04-09 01:24:01+00:00 Checking the status of deployment diabetes-service..
2021-04-09 01:29:57+00:00 Checking the status of inference endpoint diabetes-service.
Succeeded
ACI service creation operation finished, operation "Succeeded"
Healthy
###Markdown
Hopefully, the deployment has been successful and you can see a status of **Healthy**. If not, you can use the following code to get the service logs to help you troubleshoot.
###Code
print(service.get_logs())
# If you need to make a change and redeploy, you may need to delete unhealthy service using the following code:
#service.delete()
###Output
2021-04-09T01:29:49,584395000+00:00 - rsyslog/run
2021-04-09T01:29:49,596164700+00:00 - gunicorn/run
2021-04-09T01:29:49,597193000+00:00 - iot-server/run
2021-04-09T01:29:49,742188700+00:00 - nginx/run
/usr/sbin/nginx: /azureml-envs/azureml_4b824bcb98517d791c41923f24d65461/lib/libcrypto.so.1.0.0: no version information available (required by /usr/sbin/nginx)
/usr/sbin/nginx: /azureml-envs/azureml_4b824bcb98517d791c41923f24d65461/lib/libcrypto.so.1.0.0: no version information available (required by /usr/sbin/nginx)
/usr/sbin/nginx: /azureml-envs/azureml_4b824bcb98517d791c41923f24d65461/lib/libssl.so.1.0.0: no version information available (required by /usr/sbin/nginx)
/usr/sbin/nginx: /azureml-envs/azureml_4b824bcb98517d791c41923f24d65461/lib/libssl.so.1.0.0: no version information available (required by /usr/sbin/nginx)
/usr/sbin/nginx: /azureml-envs/azureml_4b824bcb98517d791c41923f24d65461/lib/libssl.so.1.0.0: no version information available (required by /usr/sbin/nginx)
EdgeHubConnectionString and IOTEDGE_IOTHUBHOSTNAME are not set. Exiting...
2021-04-09T01:29:50,416744900+00:00 - iot-server/finish 1 0
2021-04-09T01:29:50,423770000+00:00 - Exit code 1 is normal. Not restarting iot-server.
Starting gunicorn 19.9.0
Listening at: http://127.0.0.1:31311 (66)
Using worker: sync
worker timeout is set to 300
Booting worker with pid: 98
SPARK_HOME not set. Skipping PySpark Initialization.
Initializing logger
2021-04-09 01:29:53,248 | root | INFO | Starting up app insights client
2021-04-09 01:29:53,248 | root | INFO | Starting up request id generator
2021-04-09 01:29:53,248 | root | INFO | Starting up app insight hooks
2021-04-09 01:29:53,248 | root | INFO | Invoking user's init function
/azureml-envs/azureml_4b824bcb98517d791c41923f24d65461/lib/python3.6/site-packages/sklearn/base.py:334: UserWarning: Trying to unpickle estimator DecisionTreeClassifier from version 0.22.2.post1 when using version 0.23.2. This might lead to breaking code or invalid results. Use at your own risk.
UserWarning)
2021-04-09 01:29:54,200 | root | INFO | Users's init has completed successfully
2021-04-09 01:29:54,203 | root | INFO | Skipping middleware: dbg_model_info as it's not enabled.
2021-04-09 01:29:54,203 | root | INFO | Skipping middleware: dbg_resource_usage as it's not enabled.
2021-04-09 01:29:54,204 | root | INFO | Scoring timeout is found from os.environ: 60000 ms
2021-04-09 01:30:03,300 | root | INFO | Swagger file not present
2021-04-09 01:30:03,300 | root | INFO | 404
127.0.0.1 - - [09/Apr/2021:01:30:03 +0000] "GET /swagger.json HTTP/1.0" 404 19 "-" "Go-http-client/1.1"
2021-04-09 01:30:09,219 | root | INFO | Swagger file not present
2021-04-09 01:30:09,219 | root | INFO | 404
127.0.0.1 - - [09/Apr/2021:01:30:09 +0000] "GET /swagger.json HTTP/1.0" 404 19 "-" "Go-http-client/1.1"
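###Markdown
Note the UserWarning in the logs above: the model was pickled with scikit-learn 0.22.2.post1 but unpickled with 0.23.2. One way to avoid such mismatches is to pin the training-time version in the environment file; a sketch using the same CondaDependencies helper as before:
###Code
from azureml.core.conda_dependencies import CondaDependencies

# Pin scikit-learn to the version the model was trained with
pinned_env = CondaDependencies()
pinned_env.add_conda_package('scikit-learn==0.22.2.post1')
print(pinned_env.serialize_to_string())
###Output
_____no_output_____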
###Markdown
Take a look at your workspace in [Azure Machine Learning Studio](https://ml.azure.com) and view the **Endpoints** page, which shows the deployed services in your workspace.
You can also retrieve the names of web services in your workspace by running the following code:
###Code
for webservice_name in ws.webservices:
print(webservice_name)
###Output
diabetes-service
###Markdown
Use the web service
With the service deployed, now you can consume it from a client application.
###Code
import json
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22]]
print ('Patient: {}'.format(x_new[0]))
# Convert the array to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Call the web service, passing the input data (the web service will also accept the data in binary format)
predictions = service.run(input_data = input_json)
# Get the predicted class - it'll be the first (and only) one.
predicted_classes = json.loads(predictions)
print(predicted_classes[0])
###Output
Patient: [2, 180, 74, 24, 21, 23.9091702, 1.488172308, 22]
diabetic
###Markdown
You can also send multiple patient observations to the service, and get back a prediction for each one.
###Code
import json
# This time our input is an array of two feature arrays
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22],
[0,148,58,11,179,39.19207553,0.160829008,45]]
# Convert the array or arrays to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Call the web service, passing the input data
predictions = service.run(input_data = input_json)
# Get the predicted classes.
predicted_classes = json.loads(predictions)
for i in range(len(x_new)):
print ("Patient {}".format(x_new[i]), predicted_classes[i] )
###Output
Patient [2, 180, 74, 24, 21, 23.9091702, 1.488172308, 22] diabetic
Patient [0, 148, 58, 11, 179, 39.19207553, 0.160829008, 45] not-diabetic
###Markdown
The code above uses the Azure Machine Learning SDK to connect to the containerized web service and use it to generate predictions from your diabetes classification model. In production, a model is likely to be consumed by business applications that do not use the Azure Machine Learning SDK, but simply make HTTP requests to the web service.
Let's determine the URL to which these applications must submit their requests:
###Code
endpoint = service.scoring_uri
print(endpoint)
###Output
http://4ee8b40b-1c5f-466b-a90a-2eba1e22c6e2.eastasia.azurecontainer.io/score
###Markdown
Now that you know the endpoint URI, an application can simply make an HTTP request, sending the patient data in JSON format, and receive back the predicted class(es).
###Code
import requests
import json
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22],
[0,148,58,11,179,39.19207553,0.160829008,45]]
# Convert the array to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Set the content type
headers = { 'Content-Type':'application/json' }
predictions = requests.post(endpoint, input_json, headers = headers)
predicted_classes = json.loads(predictions.json())
for i in range(len(x_new)):
print ("Patient {}".format(x_new[i]), predicted_classes[i] )
###Output
Patient [2, 180, 74, 24, 21, 23.9091702, 1.488172308, 22] diabetic
Patient [0, 148, 58, 11, 179, 39.19207553, 0.160829008, 45] not-diabetic
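###Markdown
In a production client you would also check the HTTP status code before parsing the response body. A minimal sketch reusing the endpoint, payload, and headers from the previous cell:
###Code
import json
import requests

response = requests.post(endpoint, input_json, headers = headers)
if response.status_code == 200:
    # The body is a JSON-encoded string of class names, so it is decoded twice
    print(json.loads(response.json()))
else:
    print('Request failed:', response.status_code, response.text)
###Output
_____no_output_____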
###Markdown
You've deployed your web service as an Azure Container Instance (ACI) service that requires no authentication. This is fine for development and testing, but for production you should consider deploying to an Azure Kubernetes Service (AKS) cluster and enabling token-based authentication. This would require REST requests to include an **Authorization** header.
Delete the service
When you no longer need your service, you should delete it to avoid incurring unnecessary charges.
###Code
service.delete()
print ('Service deleted.')
###Output
Service deleted.
###Markdown
Create a real-time inferencing service
After training a predictive model, you can deploy it as a real-time service that clients can use to get predictions from new data.
Before you start
If you haven't already done so, you must install the latest version of the **azureml-sdk** and **azureml-widgets** packages before running this notebook. To do this, run the cell below and then ***restart the kernel*** before running the subsequent cells.
###Code
!pip install --upgrade azureml-sdk azureml-widgets
###Output
_____no_output_____
###Markdown
Connect to your workspace
With the latest version of the SDK installed, now you're ready to connect to your workspace.
> **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.
###Code
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
###Output
_____no_output_____
###Markdown
Train and register a model
Now let's train and register a model.
###Code
from azureml.core import Experiment
from azureml.core import Model
import pandas as pd
import numpy as np
import joblib
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
# Create an Azure ML experiment in your workspace
experiment = Experiment(workspace=ws, name="mslearn-train-diabetes")
run = experiment.start_logging()
print("Starting experiment:", experiment.name)
# load the diabetes dataset
print("Loading Data...")
diabetes = pd.read_csv('data/diabetes.csv')
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a decision tree model
print('Training a decision tree model')
model = DecisionTreeClassifier().fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# Save the trained model
model_file = 'diabetes_model.pkl'
joblib.dump(value=model, filename=model_file)
run.upload_file(name = 'outputs/' + model_file, path_or_stream = './' + model_file)
# Complete the run
run.complete()
# Register the model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
tags={'Training context':'Inline Training'},
properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
print('Model trained and registered.')
###Output
_____no_output_____
###Markdown
Deploy the model as a web service
You have trained and registered a machine learning model that classifies patients based on the likelihood of them having diabetes. This model could be used in a production environment such as a doctor's surgery where only patients deemed to be at risk need to be subjected to a clinical test for diabetes. To support this scenario, you will deploy the model as a web service.
First, let's determine what models you have registered in the workspace.
###Code
from azureml.core import Model
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
###Output
_____no_output_____
###Markdown
Right, now let's get the model that we want to deploy. By default, if we specify a model name, the latest version will be returned.
###Code
model = ws.models['diabetes_model']
print(model.name, 'version', model.version)
###Output
_____no_output_____
###Markdown
We're going to create a web service to host this model, and this will require some code and configuration files; so let's create a folder for those.
###Code
import os
folder_name = 'diabetes_service'
# Create a folder for the web service files
experiment_folder = './' + folder_name
os.makedirs(experiment_folder, exist_ok=True)
print(folder_name, 'folder created.')
# Set path for scoring script
script_file = os.path.join(experiment_folder,"score_diabetes.py")
###Output
_____no_output_____
###Markdown
The web service where we deploy the model will need some Python code to load the input data, get the model from the workspace, and generate and return predictions. We'll save this code in an *entry script* (often called a *scoring script*) that will be deployed to the web service:
###Code
%%writefile $script_file
import json
import joblib
import numpy as np
from azureml.core.model import Model
# Called when the service is loaded
def init():
global model
# Get the path to the deployed model file and load it
model_path = Model.get_model_path('diabetes_model')
model = joblib.load(model_path)
# Called when a request is received
def run(raw_data):
# Get the input data as a numpy array
data = np.array(json.loads(raw_data)['data'])
# Get a prediction from the model
predictions = model.predict(data)
# Get the corresponding classname for each prediction (0 or 1)
classnames = ['not-diabetic', 'diabetic']
predicted_classes = []
for prediction in predictions:
predicted_classes.append(classnames[prediction])
# Return the predictions as JSON
return json.dumps(predicted_classes)
###Output
_____no_output_____
###Markdown
The web service will be hosted in a container, and the container will need to install any required Python dependencies when it gets initialized. In this case, our scoring code requires **scikit-learn**, so we'll create a .yml file that tells the container host to install this into the environment.
###Code
from azureml.core.conda_dependencies import CondaDependencies
# Add the dependencies for our model (AzureML defaults is already included)
myenv = CondaDependencies()
myenv.add_conda_package('scikit-learn')
# Save the environment config as a .yml file
env_file = os.path.join(experiment_folder,"diabetes_env.yml")
with open(env_file,"w") as f:
f.write(myenv.serialize_to_string())
print("Saved dependency info in", env_file)
# Print the .yml file
with open(env_file,"r") as f:
print(f.read())
###Output
_____no_output_____
###Markdown
Now you're ready to deploy. We'll deploy the container as a service named **diabetes-service**. The deployment process includes the following steps:
1. Define an inference configuration, which includes the scoring and environment files required to load and use the model.
2. Define a deployment configuration that defines the execution environment in which the service will be hosted. In this case, an Azure Container Instance.
3. Deploy the model as a web service.
4. Verify the status of the deployed service.
> **More Information**: For more details about model deployment, and options for target execution environments, see the [documentation](https://docs.microsoft.com/azure/machine-learning/how-to-deploy-and-where).
Deployment will take some time as it first runs a process to create a container image, and then runs a process to create a web service based on the image. When deployment has completed successfully, you'll see a status of **Healthy**.
###Code
from azureml.core.webservice import AciWebservice
from azureml.core.model import InferenceConfig
# Configure the scoring environment
inference_config = InferenceConfig(runtime= "python",
entry_script=script_file,
conda_file=env_file)
deployment_config = AciWebservice.deploy_configuration(cpu_cores = 1, memory_gb = 1)
service_name = "diabetes-service"
service = Model.deploy(ws, service_name, [model], inference_config, deployment_config)
service.wait_for_deployment(True)
print(service.state)
###Output
_____no_output_____
###Markdown
Hopefully, the deployment has been successful and you can see a status of **Healthy**. If not, you can use the following code to get the service logs to help you troubleshoot.
###Code
print(service.get_logs())
# If you need to make a change and redeploy, you may need to delete unhealthy service using the following code:
#service.delete()
###Output
_____no_output_____
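###Markdown
Rather than deleting and redeploying for every change, some settings can be updated in place on an existing ACI service. A sketch (assuming you want request-level telemetry) that switches on Application Insights:
###Code
# Update the deployed service in place instead of redeploying
service.update(enable_app_insights=True)
print(service.state)
###Output
_____no_output_____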
###Markdown
Take a look at your workspace in [Azure Machine Learning Studio](https://ml.azure.com) and view the **Endpoints** page, which shows the deployed services in your workspace.
You can also retrieve the names of web services in your workspace by running the following code:
###Code
for webservice_name in ws.webservices:
print(webservice_name)
###Output
_____no_output_____
###Markdown
Use the web service
With the service deployed, now you can consume it from a client application.
###Code
import json
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22]]
print ('Patient: {}'.format(x_new[0]))
# Convert the array to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Call the web service, passing the input data (the web service will also accept the data in binary format)
predictions = service.run(input_data = input_json)
# Get the predicted class - it'll be the first (and only) one.
predicted_classes = json.loads(predictions)
print(predicted_classes[0])
###Output
_____no_output_____
###Markdown
You can also send multiple patient observations to the service, and get back a prediction for each one.
###Code
import json
# This time our input is an array of two feature arrays
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22],
[0,148,58,11,179,39.19207553,0.160829008,45]]
# Convert the array or arrays to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Call the web service, passing the input data
predictions = service.run(input_data = input_json)
# Get the predicted classes.
predicted_classes = json.loads(predictions)
for i in range(len(x_new)):
print ("Patient {}".format(x_new[i]), predicted_classes[i] )
###Output
_____no_output_____
###Markdown
The code above uses the Azure Machine Learning SDK to connect to the containerized web service and use it to generate predictions from your diabetes classification model. In production, a model is likely to be consumed by business applications that do not use the Azure Machine Learning SDK, but simply make HTTP requests to the web service.
Let's determine the URL to which these applications must submit their requests:
###Code
endpoint = service.scoring_uri
print(endpoint)
###Output
_____no_output_____
###Markdown
Now that you know the endpoint URI, an application can simply make an HTTP request, sending the patient data in JSON format, and receive back the predicted class(es).
###Code
import requests
import json
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22],
[0,148,58,11,179,39.19207553,0.160829008,45]]
# Convert the array to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Set the content type
headers = { 'Content-Type':'application/json' }
predictions = requests.post(endpoint, input_json, headers = headers)
predicted_classes = json.loads(predictions.json())
for i in range(len(x_new)):
print ("Patient {}".format(x_new[i]), predicted_classes[i] )
###Output
_____no_output_____
###Markdown
You've deployed your web service as an Azure Container Instance (ACI) service that requires no authentication. This is fine for development and testing, but for production you should consider deploying to an Azure Kubernetes Service (AKS) cluster and enabling token-based authentication. This would require REST requests to include an **Authorization** header. As a rough sketch only (this lab's ACI deployment has no authentication, and the token value below is a placeholder, not something this notebook produces), such an authenticated request might look like this:
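###Code
# Hypothetical sketch: calling a token-authenticated AKS endpoint.
# The token would come from the deployed AKS service (for example via its
# get_token() method); '<access-token>' is a placeholder, not a real value.
auth_token = '<access-token>'
auth_headers = { 'Content-Type':'application/json',
                 'Authorization':'Bearer ' + auth_token }
predictions = requests.post(endpoint, input_json, headers = auth_headers)
###Output
_____no_output_____
###Markdown
Delete the service When you no longer need your service, you should delete it to avoid incurring unnecessary charges.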
###Code
service.delete()
print ('Service deleted.')
###Output
_____no_output_____
###Markdown
Create a real-time inferencing service After training a predictive model, you can deploy it as a real-time service that clients can use to get predictions from new data. Connect to your workspace To get started, connect to your workspace. > **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.
###Code
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
###Output
_____no_output_____
###Markdown
Train and register a model Now let's train and register a model.
###Code
from azureml.core import Experiment
from azureml.core import Model
import pandas as pd
import numpy as np
import joblib
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
# Create an Azure ML experiment in your workspace
experiment = Experiment(workspace=ws, name="mslearn-train-diabetes")
run = experiment.start_logging()
print("Starting experiment:", experiment.name)
# load the diabetes dataset
print("Loading Data...")
diabetes = pd.read_csv('data/diabetes.csv')
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a decision tree model
print('Training a decision tree model')
model = DecisionTreeClassifier().fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# Save the trained model
model_file = 'diabetes_model.pkl'
joblib.dump(value=model, filename=model_file)
run.upload_file(name = 'outputs/' + model_file, path_or_stream = './' + model_file)
# Complete the run
run.complete()
# Register the model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
tags={'Training context':'Inline Training'},
properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
print('Model trained and registered.')
###Output
_____no_output_____
###Markdown
Deploy the model as a web service You have trained and registered a machine learning model that classifies patients based on the likelihood of them having diabetes. This model could be used in a production environment such as a doctor's surgery where only patients deemed to be at risk need to be subjected to a clinical test for diabetes. To support this scenario, you will deploy the model as a web service. First, let's determine what models you have registered in the workspace.
###Code
from azureml.core import Model
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
###Output
_____no_output_____
###Markdown
Right, now let's get the model that we want to deploy. By default, if we specify a model name, the latest version will be returned.
###Code
model = ws.models['diabetes_model']
print(model.name, 'version', model.version)
###Output
_____no_output_____
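###Markdown
If you need a particular version instead of the latest, you can request it explicitly. A minimal sketch (version 1 here is just an illustrative choice):
###Code
# Sketch: retrieve a specific registered model version by number.
from azureml.core import Model
specific_model = Model(ws, name='diabetes_model', version=1)
print(specific_model.name, 'version', specific_model.version)
###Output
_____no_output_____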
###Markdown
We're going to create a web service to host this model, and this will require some code and configuration files; so let's create a folder for those.
###Code
import os
# Create a folder for the deployment files
deployment_folder = './diabetes_service'
os.makedirs(deployment_folder, exist_ok=True)
print(deployment_folder, 'folder created.')
# Set path for scoring script
script_file = 'score_diabetes.py'
script_path = os.path.join(deployment_folder,script_file)
###Output
_____no_output_____
###Markdown
The web service where we deploy the model will need some Python code to load the input data, get the model from the workspace, and generate and return predictions. We'll save this code in an *entry script* (often called a *scoring script*) that will be deployed to the web service. The script consists of two functions: - **init**: This function is called when the service is initialized, and is generally used to load the model. Note that the scoring script uses the **AZUREML_MODEL_DIR** environment variable to determine the folder where the model is stored. - **run**: This function is called each time a client application submits new data, and is generally used to generate predictions from the model.
###Code
%%writefile $script_path
import json
import joblib
import numpy as np
import os
# Called when the service is loaded
def init():
global model
# Get the path to the deployed model file and load it
model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'diabetes_model.pkl')
model = joblib.load(model_path)
# Called when a request is received
def run(raw_data):
# Get the input data as a numpy array
data = np.array(json.loads(raw_data)['data'])
# Get a prediction from the model
predictions = model.predict(data)
# Get the corresponding classname for each prediction (0 or 1)
classnames = ['not-diabetic', 'diabetic']
predicted_classes = []
for prediction in predictions:
predicted_classes.append(classnames[prediction])
# Return the predictions as JSON
return json.dumps(predicted_classes)
###Output
_____no_output_____
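###Markdown
Before deploying, you can sanity-check the entry script by importing it and calling init() and run() directly. A minimal sketch, assuming diabetes_model.pkl sits in the notebook folder (we point AZUREML_MODEL_DIR there so init() can locate it):
###Code
# Sketch: exercise the scoring script locally before deployment.
import importlib.util
import json
import os
os.environ['AZUREML_MODEL_DIR'] = '.'  # assumes ./diabetes_model.pkl exists
spec = importlib.util.spec_from_file_location('score_diabetes', script_path)
score = importlib.util.module_from_spec(spec)
spec.loader.exec_module(score)
score.init()
print(score.run(json.dumps({"data": [[2,180,74,24,21,23.9091702,1.488172308,22]]})))
###Output
_____no_output_____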
###Markdown
The web service will be hosted in a container, and the container will need to install any required Python dependencies when it gets initialized. In this case, our scoring code requires **scikit-learn** and some Azure Machine Learning specific packages that are used by the scoring web service, so we'll create an environment that includes these. Then we'll add that environment to an *inference configuration* along with the scoring script, and define a *deployment configuration* for the container in which the environment and script will be hosted. We can then deploy the model as a service based on these configurations. > **More Information**: For more details about model deployment, and options for target execution environments, see the [documentation](https://docs.microsoft.com/azure/machine-learning/how-to-deploy-and-where). Deployment will take some time as it first runs a process to create a container image, and then runs a process to create a web service based on the image. When deployment has completed successfully, you'll see a status of **Healthy**.
###Code
from azureml.core import Environment
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
# Configure the scoring environment
service_env = Environment(name='service-env')
python_packages = ['scikit-learn', 'azureml-defaults', 'azure-ml-api-sdk']
for package in python_packages:
service_env.python.conda_dependencies.add_pip_package(package)
inference_config = InferenceConfig(source_directory=deployment_folder,
entry_script=script_file,
environment=service_env)
# Configure the web service container
deployment_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)
# Deploy the model as a service
print('Deploying model...')
service_name = "diabetes-service"
service = Model.deploy(ws, service_name, [model], inference_config, deployment_config, overwrite=True)
service.wait_for_deployment(True)
print(service.state)
###Output
_____no_output_____
###Markdown
Hopefully, the deployment has been successful and you can see a status of **Healthy**. If not, you can use the following code to get the service logs to help you troubleshoot.
###Code
print(service.get_logs())
# If you need to make a change and redeploy, you may need to delete unhealthy service using the following code:
#service.delete()
###Output
_____no_output_____
###Markdown
Take a look at your workspace in [Azure Machine Learning Studio](https://ml.azure.com) and view the **Endpoints** page, which shows the deployed services in your workspace. You can also retrieve the names of web services in your workspace by running the following code:
###Code
for webservice_name in ws.webservices:
print(webservice_name)
###Output
_____no_output_____
###Markdown
Use the web service With the service deployed, now you can consume it from a client application.
###Code
import json
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22]]
print ('Patient: {}'.format(x_new[0]))
# Convert the array to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Call the web service, passing the input data (the web service will also accept the data in binary format)
predictions = service.run(input_data = input_json)
# Get the predicted class - it'll be the first (and only) one.
predicted_classes = json.loads(predictions)
print(predicted_classes[0])
###Output
_____no_output_____
###Markdown
You can also send multiple patient observations to the service, and get back a prediction for each one.
###Code
import json
# This time our input is an array of two feature arrays
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22],
[0,148,58,11,179,39.19207553,0.160829008,45]]
# Convert the array or arrays to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Call the web service, passing the input data
predictions = service.run(input_data = input_json)
# Get the predicted classes.
predicted_classes = json.loads(predictions)
for i in range(len(x_new)):
print ("Patient {}".format(x_new[i]), predicted_classes[i] )
###Output
_____no_output_____
###Markdown
The code above uses the Azure Machine Learning SDK to connect to the containerized web service and use it to generate predictions from your diabetes classification model. In production, a model is likely to be consumed by business applications that do not use the Azure Machine Learning SDK, but simply make HTTP requests to the web service. Let's determine the URL to which these applications must submit their requests:
###Code
endpoint = service.scoring_uri
print(endpoint)
###Output
_____no_output_____
###Markdown
Now that you know the endpoint URI, an application can simply make an HTTP request, sending the patient data in JSON format, and receive back the predicted class(es).
###Code
import requests
import json
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22],
[0,148,58,11,179,39.19207553,0.160829008,45]]
# Convert the array to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Set the content type
headers = { 'Content-Type':'application/json' }
predictions = requests.post(endpoint, input_json, headers = headers)
predicted_classes = json.loads(predictions.json())
for i in range(len(x_new)):
print ("Patient {}".format(x_new[i]), predicted_classes[i] )
###Output
_____no_output_____
###Markdown
You've deployed your web service as an Azure Container Instance (ACI) service that requires no authentication. This is fine for development and testing, but for production you should consider deploying to an Azure Kubernetes Service (AKS) cluster and enabling token-based authentication. This would require REST requests to include an **Authorization** header. A rough sketch of what the corresponding AKS deployment configuration might look like follows (the cluster name 'aks-cluster' is an assumption; this lab does not create or attach an AKS cluster):
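###Code
# Hypothetical sketch: deployment configuration for an attached AKS cluster
# with token-based authentication enabled. 'aks-cluster' is a placeholder name.
from azureml.core.compute import AksCompute
from azureml.core.webservice import AksWebservice
aks_target = AksCompute(ws, 'aks-cluster')
aks_config = AksWebservice.deploy_configuration(cpu_cores=1, memory_gb=1,
                                                token_auth_enabled=True)
# service = Model.deploy(ws, 'diabetes-service-aks', [model], inference_config,
#                        aks_config, deployment_target=aks_target)
###Output
_____no_output_____
###Markdown
Delete the service When you no longer need your service, you should delete it to avoid incurring unnecessary charges.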
###Code
service.delete()
print ('Service deleted.')
###Output
_____no_output_____
###Markdown
Create a real-time inferencing service After training a predictive model, you can deploy it as a real-time service that clients can use to get predictions from new data. Connect to your workspace To get started, connect to your workspace. > **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.
###Code
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
###Output
Ready to use Azure ML 1.34.0 to work with ict-915-02-jmdl
###Markdown
Train and register a model Now let's train and register a model.
###Code
from azureml.core import Experiment
from azureml.core import Model
import pandas as pd
import numpy as np
import joblib
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
# Create an Azure ML experiment in your workspace
experiment = Experiment(workspace=ws, name="mslearn-train-diabetes")
run = experiment.start_logging()
print("Starting experiment:", experiment.name)
# load the diabetes dataset
print("Loading Data...")
diabetes = pd.read_csv('data/diabetes.csv')
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a decision tree model
print('Training a decision tree model')
model = DecisionTreeClassifier().fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# Save the trained model
model_file = 'diabetes_model.pkl'
joblib.dump(value=model, filename=model_file)
run.upload_file(name = 'outputs/' + model_file, path_or_stream = './' + model_file)
# Complete the run
run.complete()
# Register the model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
tags={'Training context':'Inline Training'},
properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
print('Model trained and registered.')
###Output
Starting experiment: mslearn-train-diabetes
Loading Data...
Training a decision tree model
Accuracy: 0.8863333333333333
AUC: 0.8750708990497039
Model trained and registered.
###Markdown
Deploy the model as a web service You have trained and registered a machine learning model that classifies patients based on the likelihood of them having diabetes. This model could be used in a production environment such as a doctor's surgery where only patients deemed to be at risk need to be subjected to a clinical test for diabetes. To support this scenario, you will deploy the model as a web service. First, let's determine what models you have registered in the workspace.
###Code
from azureml.core import Model
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
###Output
diabetes_model version: 8
Training context : Inline Training
AUC : 0.8750708990497039
Accuracy : 0.8863333333333333
diabetes_model version: 7
Training context : Inline Training
AUC : 0.876867008308871
Accuracy : 0.8903333333333333
diabetes_model version: 6
Training context : Pipeline
AUC : 0.8857524015639696
Accuracy : 0.9006666666666666
diabetes_model version: 5
Training context : Compute cluster
AUC : 0.8829290099725624
Accuracy : 0.898
diabetes_model version: 4
Training context : File dataset
AUC : 0.8568743524381947
Accuracy : 0.7891111111111111
diabetes_model version: 3
Training context : Tabular dataset
AUC : 0.8568509052814499
Accuracy : 0.7891111111111111
diabetes_model version: 2
Training context : Parameterized script
AUC : 0.8483103636996865
Accuracy : 0.7746666666666666
diabetes_model version: 1
Training context : Script
AUC : 0.8483377282451863
Accuracy : 0.774
AutoMLc4345bc5e0 version: 1
###Markdown
Right, now let's get the model that we want to deploy. By default, if we specify a model name, the latest version will be returned.
###Code
model = ws.models['diabetes_model']
print(model.name, 'version', model.version)
###Output
diabetes_model version 8
###Markdown
We're going to create a web service to host this model, and this will require some code and configuration files; so let's create a folder for those.
###Code
import os
# Create a folder for the deployment files
deployment_folder = './diabetes_service'
os.makedirs(deployment_folder, exist_ok=True)
print(deployment_folder, 'folder created.')
# Set path for scoring script
script_file = 'score_diabetes.py'
script_path = os.path.join(deployment_folder,script_file)
###Output
./diabetes_service folder created.
###Markdown
The web service where we deploy the model will need some Python code to load the input data, get the model from the workspace, and generate and return predictions. We'll save this code in an *entry script* (often called a *scoring script*) that will be deployed to the web service. The script consists of two functions: - **init**: This function is called when the service is initialized, and is generally used to load the model. Note that the scoring script uses the **AZUREML_MODEL_DIR** environment variable to determine the folder where the model is stored. - **run**: This function is called each time a client application submits new data, and is generally used to generate predictions from the model.
###Code
%%writefile $script_path
import json
import joblib
import numpy as np
import os
# Called when the service is loaded
def init():
global model
# Get the path to the deployed model file and load it
model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'diabetes_model.pkl')
model = joblib.load(model_path)
# Called when a request is received
def run(raw_data):
# Get the input data as a numpy array
data = np.array(json.loads(raw_data)['data'])
# Get a prediction from the model
predictions = model.predict(data)
# Get the corresponding classname for each prediction (0 or 1)
classnames = ['not-diabetic', 'diabetic']
predicted_classes = []
for prediction in predictions:
predicted_classes.append(classnames[prediction])
# Return the predictions as JSON
return json.dumps(predicted_classes)
###Output
Writing ./diabetes_service/score_diabetes.py
###Markdown
The web service will be hosted in a container, and the container will need to install any required Python dependencies when it gets initialized. In this case, our scoring code requires **scikit-learn** and some Azure Machine Learning specific packages that are used by the scoring web service, so we'll create an environment that includes these. Then we'll add that environment to an *inference configuration* along with the scoring script, and define a *deployment configuration* for the container in which the environment and script will be hosted. We can then deploy the model as a service based on these configurations. > **More Information**: For more details about model deployment, and options for target execution environments, see the [documentation](https://docs.microsoft.com/azure/machine-learning/how-to-deploy-and-where). Deployment will take some time as it first runs a process to create a container image, and then runs a process to create a web service based on the image. When deployment has completed successfully, you'll see a status of **Healthy**.
###Code
from azureml.core import Environment
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
# Configure the scoring environment
service_env = Environment(name='service-env')
python_packages = ['scikit-learn', 'azureml-defaults', 'azure-ml-api-sdk']
for package in python_packages:
service_env.python.conda_dependencies.add_pip_package(package)
inference_config = InferenceConfig(source_directory=deployment_folder,
entry_script=script_file,
environment=service_env)
# Configure the web service container
deployment_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)
# Deploy the model as a service
print('Deploying model...')
service_name = "diabetes-service"
service = Model.deploy(ws, service_name, [model], inference_config, deployment_config, overwrite=True)
service.wait_for_deployment(True)
print(service.state)
###Output
Deploying model...
Tips: You can try get_logs(): https://aka.ms/debugimage#dockerlog or local deployment: https://aka.ms/debugimage#debug-locally to debug if deployment takes longer than 10 minutes.
Running
2021-10-08 23:39:06+00:00 Creating Container Registry if not exists.
2021-10-08 23:39:06+00:00 Registering the environment.
2021-10-08 23:39:08+00:00 Building image..
2021-10-08 23:44:22+00:00 Generating deployment configuration.
2021-10-08 23:44:23+00:00 Submitting deployment to compute..
2021-10-08 23:44:28+00:00 Checking the status of deployment diabetes-service..
2021-10-08 23:47:07+00:00 Checking the status of inference endpoint diabetes-service.
Succeeded
ACI service creation operation finished, operation "Succeeded"
Healthy
###Markdown
Hopefully, the deployment has been successful and you can see a status of **Healthy**. If not, you can use the following code to get the service logs to help you troubleshoot.
###Code
print(service.get_logs())
# If you need to make a change and redeploy, you may need to delete unhealthy service using the following code:
#service.delete()
###Output
2021-10-08T23:46:57,082050100+00:00 - iot-server/run
2021-10-08T23:46:57,086304600+00:00 - rsyslog/run
2021-10-08T23:46:57,096378800+00:00 - gunicorn/run
Dynamic Python package installation is disabled.
Starting HTTP server
2021-10-08T23:46:57,144416800+00:00 - nginx/run
EdgeHubConnectionString and IOTEDGE_IOTHUBHOSTNAME are not set. Exiting...
2021-10-08T23:46:57,501763800+00:00 - iot-server/finish 1 0
2021-10-08T23:46:57,504008700+00:00 - Exit code 1 is normal. Not restarting iot-server.
Starting gunicorn 20.1.0
Listening at: http://127.0.0.1:31311 (70)
Using worker: sync
worker timeout is set to 300
Booting worker with pid: 96
SPARK_HOME not set. Skipping PySpark Initialization.
Initializing logger
2021-10-08 23:46:58,650 | root | INFO | Starting up app insights client
logging socket was found. logging is available.
logging socket was found. logging is available.
2021-10-08 23:46:58,652 | root | INFO | Starting up request id generator
2021-10-08 23:46:58,652 | root | INFO | Starting up app insight hooks
2021-10-08 23:46:58,652 | root | INFO | Invoking user's init function
no request id,/azureml-envs/azureml_b111972b96fe2f23e1032e165eb7c9c3/lib/python3.6/site-packages/sklearn/base.py:315: UserWarning: Trying to unpickle estimator DecisionTreeClassifier from version 0.22.2.post1 when using version 0.24.2. This might lead to breaking code or invalid results. Use at your own risk.
UserWarning)
2021-10-08 23:46:59,387 | root | INFO | Users's init has completed successfully
/azureml-envs/azureml_b111972b96fe2f23e1032e165eb7c9c3/lib/python3.6/site-packages/sklearn/base.py:315: UserWarning: Trying to unpickle estimator DecisionTreeClassifier from version 0.22.2.post1 when using version 0.24.2. This might lead to breaking code or invalid results. Use at your own risk.
UserWarning)
2021-10-08 23:46:59,394 | root | INFO | Skipping middleware: dbg_model_info as it's not enabled.
2021-10-08 23:46:59,394 | root | INFO | Skipping middleware: dbg_resource_usage as it's not enabled.
2021-10-08 23:46:59,395 | root | INFO | Scoring timeout is found from os.environ: 60000 ms
2021-10-08 23:47:07,777 | root | INFO | Swagger file not present
2021-10-08 23:47:07,778 | root | INFO | 404
127.0.0.1 - - [08/Oct/2021:23:47:07 +0000] "GET /swagger.json HTTP/1.0" 404 19 "-" "Go-http-client/1.1"
2021-10-08 23:47:10,683 | root | INFO | Swagger file not present
2021-10-08 23:47:10,684 | root | INFO | 404
127.0.0.1 - - [08/Oct/2021:23:47:10 +0000] "GET /swagger.json HTTP/1.0" 404 19 "-" "Go-http-client/1.1"
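###Markdown
Note the UserWarning in the logs above: the model was pickled with scikit-learn 0.22.2.post1 but unpickled with 0.24.2. One way to avoid this is to pin the training-time version in the scoring environment. A minimal sketch (the pin below is inferred from the log message, not part of this lab):
###Code
# Sketch: pin scikit-learn in the service environment to the training version.
from azureml.core import Environment
pinned_env = Environment(name='service-env-pinned')
for package in ['scikit-learn==0.22.2.post1', 'azureml-defaults', 'azure-ml-api-sdk']:
    pinned_env.python.conda_dependencies.add_pip_package(package)
###Output
_____no_output_____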
###Markdown
Take a look at your workspace in [Azure Machine Learning Studio](https://ml.azure.com) and view the **Endpoints** page, which shows the deployed services in your workspace. You can also retrieve the names of web services in your workspace by running the following code:
###Code
for webservice_name in ws.webservices:
print(webservice_name)
###Output
diabetes-service
auto-predict-diabetes
###Markdown
Use the web service With the service deployed, now you can consume it from a client application.
###Code
import json
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22]]
print ('Patient: {}'.format(x_new[0]))
# Convert the array to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Call the web service, passing the input data (the web service will also accept the data in binary format)
predictions = service.run(input_data = input_json)
# Get the predicted class - it'll be the first (and only) one.
predicted_classes = json.loads(predictions)
print(predicted_classes[0])
###Output
Patient: [2, 180, 74, 24, 21, 23.9091702, 1.488172308, 22]
diabetic
###Markdown
You can also send multiple patient observations to the service, and get back a prediction for each one.
###Code
import json
# This time our input is an array of two feature arrays
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22],
[0,148,58,11,179,39.19207553,0.160829008,45]]
# Convert the array or arrays to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Call the web service, passing the input data
predictions = service.run(input_data = input_json)
# Get the predicted classes.
predicted_classes = json.loads(predictions)
for i in range(len(x_new)):
print ("Patient {}".format(x_new[i]), predicted_classes[i] )
###Output
Patient [2, 180, 74, 24, 21, 23.9091702, 1.488172308, 22] diabetic
Patient [0, 148, 58, 11, 179, 39.19207553, 0.160829008, 45] not-diabetic
###Markdown
The code above uses the Azure Machine Learning SDK to connect to the containerized web service and use it to generate predictions from your diabetes classification model. In production, a model is likely to be consumed by business applications that do not use the Azure Machine Learning SDK, but simply make HTTP requests to the web service. Let's determine the URL to which these applications must submit their requests:
###Code
endpoint = service.scoring_uri
print(endpoint)
###Output
http://8bb1547d-15c7-48c7-bc2c-857fff6f5950.eastus.azurecontainer.io/score
###Markdown
Now that you know the endpoint URI, an application can simply make an HTTP request, sending the patient data in JSON format, and receive back the predicted class(es).
###Code
import requests
import json
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22],
[0,148,58,11,179,39.19207553,0.160829008,45]]
# Convert the array to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Set the content type
headers = { 'Content-Type':'application/json' }
predictions = requests.post(endpoint, input_json, headers = headers)
predicted_classes = json.loads(predictions.json())
for i in range(len(x_new)):
print ("Patient {}".format(x_new[i]), predicted_classes[i] )
###Output
Patient [2, 180, 74, 24, 21, 23.9091702, 1.488172308, 22] diabetic
Patient [0, 148, 58, 11, 179, 39.19207553, 0.160829008, 45] not-diabetic
###Markdown
You've deployed your web service as an Azure Container Instance (ACI) service that requires no authentication. This is fine for development and testing, but for production you should consider deploying to an Azure Kubernetes Service (AKS) cluster and enabling token-based authentication. This would require REST requests to include an **Authorization** header. Key-based authentication is also available, even on ACI; a minimal sketch follows, assuming the service had been deployed with auth_enabled=True (this lab's deployment was not):
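###Code
# Hypothetical sketch: key-based authentication on ACI. This assumes the service
# was deployed with AciWebservice.deploy_configuration(..., auth_enabled=True).
primary_key, secondary_key = service.get_keys()
key_headers = { 'Content-Type':'application/json',
                'Authorization':'Bearer ' + primary_key }
predictions = requests.post(endpoint, input_json, headers = key_headers)
###Output
_____no_output_____
###Markdown
Delete the service When you no longer need your service, you should delete it to avoid incurring unnecessary charges.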
###Code
service.delete()
print ('Service deleted.')
###Output
Service deleted.
###Markdown
Create a real-time inferencing service After training a predictive model, you can deploy it as a real-time service that clients can use to get predictions from new data. Connect to your workspace To get started, connect to your workspace. > **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.
###Code
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
###Output
Ready to use Azure ML 1.34.0 to work with aizat
###Markdown
Train and register a model Now let's train and register a model.
###Code
from azureml.core import Experiment
from azureml.core import Model
import pandas as pd
import numpy as np
import joblib
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
# Create an Azure ML experiment in your workspace
experiment = Experiment(workspace=ws, name="mslearn-train-diabetes-rt_inference")
run = experiment.start_logging()
print("Starting experiment:", experiment.name)
# load the diabetes dataset
print("Loading Data...")
diabetes = pd.read_csv('data/diabetes.csv')
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a decision tree model
print('Training a decision tree model')
model = DecisionTreeClassifier().fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# Save the trained model
model_file = 'diabetes_model.pkl'
joblib.dump(value=model, filename=model_file)
run.upload_file(name = 'outputs/' + model_file, path_or_stream = './' + model_file)
# Complete the run
run.complete()
# Register the model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
tags={'Training context':'Inline Training'},
properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
print('Model trained and registered.')
###Output
Starting experiment: mslearn-train-diabetes-rt_inference
Loading Data...
Training a decision tree model
Accuracy: 0.8866666666666667
AUC: 0.8741031892133936
Model trained and registered.
###Markdown
Deploy the model as a web service You have trained and registered a machine learning model that classifies patients based on the likelihood of them having diabetes. This model could be used in a production environment such as a doctor's surgery where only patients deemed to be at risk need to be subjected to a clinical test for diabetes. To support this scenario, you will deploy the model as a web service. First, let's determine what models you have registered in the workspace.
###Code
from azureml.core import Model
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
###Output
diabetes_model version: 8
Training context : Inline Training
AUC : 0.8741031892133936
Accuracy : 0.8866666666666667
diabetes_model version: 7
Training context : Parameterized script
AUC : 0.8484377332205582
Accuracy : 0.774
diabetes_model version: 6
Training context : Script
AUC : 0.8483377282451863
Accuracy : 0.774
diabetes_model version: 5
Training context : Pipeline
AUC : 0.8862361650715226
Accuracy : 0.9004444444444445
diabetes_model version: 4
Training context : File dataset
AUC : 0.8468331741963582
Accuracy : 0.7793333333333333
diabetes_model version: 3
Training context : Tabular dataset
AUC : 0.8568509052814499
Accuracy : 0.7891111111111111
diabetes_model version: 2
Training context : Parameterized script
AUC : 0.8483198169063138
Accuracy : 0.774
diabetes_model version: 1
Training context : Script
AUC : 0.8484929598487486
Accuracy : 0.774
amlstudio-designer-predict-dia version: 1
CreatedByAMLStudio : true
AutoML7001303fd0 version: 1
###Markdown
Right, now let's get the model that we want to deploy. By default, if we specify a model name, the latest version will be returned.
###Code
model = ws.models['diabetes_model']
print(model.name, 'version', model.version)
###Output
diabetes_model version 8
###Markdown
We're going to create a web service to host this model, and this will require some code and configuration files; so let's create a folder for those.
###Code
import os
# Create a folder for the deployment files
deployment_folder = './diabetes_service'
os.makedirs(deployment_folder, exist_ok=True)
print(deployment_folder, 'folder created.')
# Set path for scoring script
script_file = 'score_diabetes.py'
script_path = os.path.join(deployment_folder,script_file)
###Output
./diabetes_service folder created.
###Markdown
The web service where we deploy the model will need some Python code to load the input data, get the model from the workspace, and generate and return predictions. We'll save this code in an *entry script* (often called a *scoring script*) that will be deployed to the web service. The script consists of two functions: - **init**: This function is called when the service is initialized, and is generally used to load the model. Note that the scoring script uses the **AZUREML_MODEL_DIR** environment variable to determine the folder where the model is stored. - **run**: This function is called each time a client application submits new data, and is generally used to generate predictions from the model.
###Code
%%writefile $script_path
import json
import joblib
import numpy as np
import os
# Called when the service is loaded
def init():
global model
# Get the path to the deployed model file and load it
model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'diabetes_model.pkl')
model = joblib.load(model_path)
# Called when a request is received
def run(raw_data):
# Get the input data as a numpy array
data = np.array(json.loads(raw_data)['data'])
# Get a prediction from the model
predictions = model.predict(data)
# Get the corresponding classname for each prediction (0 or 1)
classnames = ['not-diabetic', 'diabetic']
predicted_classes = []
for prediction in predictions:
predicted_classes.append(classnames[prediction])
# Return the predictions as JSON
return json.dumps(predicted_classes)
###Output
Writing ./diabetes_service/score_diabetes.py
###Markdown
The web service will be hosted in a container, and the container will need to install any required Python dependencies when it gets initialized. In this case, our scoring code requires **scikit-learn** and some Azure Machine Learning specific packages that are used by the scoring web service, so we'll create an environment that includes these. Then we'll add that environment to an *inference configuration* along with the scoring script, and define a *deployment configuration* for the container in which the environment and script will be hosted. We can then deploy the model as a service based on these configurations. > **More Information**: For more details about model deployment, and options for target execution environments, see the [documentation](https://docs.microsoft.com/azure/machine-learning/how-to-deploy-and-where). Deployment will take some time as it first runs a process to create a container image, and then runs a process to create a web service based on the image. When deployment has completed successfully, you'll see a status of **Healthy**.
###Code
from azureml.core import Environment
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
# Configure the scoring environment
service_env = Environment(name='service-env')
python_packages = ['scikit-learn', 'azureml-defaults', 'azure-ml-api-sdk']
for package in python_packages:
service_env.python.conda_dependencies.add_pip_package(package)
inference_config = InferenceConfig(source_directory=deployment_folder,
entry_script=script_file,
environment=service_env)
# Configure the web service container
deployment_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)
# Deploy the model as a service
print('Deploying model...')
service_name = "diabetes-service"
service = Model.deploy(ws, service_name, [model], inference_config, deployment_config, overwrite=True)
service.wait_for_deployment(True)
print(service.state)
###Output
Deploying model...
Tips: You can try get_logs(): https://aka.ms/debugimage#dockerlog or local deployment: https://aka.ms/debugimage#debug-locally to debug if deployment takes longer than 10 minutes.
Running
2021-11-24 06:40:56+00:00 Creating Container Registry if not exists.
2021-11-24 06:40:56+00:00 Registering the environment.
2021-11-24 06:40:59+00:00 Building image..
2021-11-24 06:45:47+00:00 Generating deployment configuration..
2021-11-24 06:45:48+00:00 Submitting deployment to compute..
2021-11-24 06:46:00+00:00 Checking the status of deployment diabetes-service..
2021-11-24 06:47:20+00:00 Checking the status of inference endpoint diabetes-service.
Succeeded
ACI service creation operation finished, operation "Succeeded"
Healthy
###Markdown
Hopefully, the deployment has been successful and you can see a status of **Healthy**. If not, you can use the following code to get the service logs to help you troubleshoot.
###Code
print(service.get_logs())
# If you need to make a change and redeploy, you may need to delete unhealthy service using the following code:
#service.delete()
###Output
2021-11-24T06:47:11,608989900+00:00 - nginx/run
2021-11-24T06:47:11,609193000+00:00 - iot-server/run
2021-11-24T06:47:11,607516700+00:00 - gunicorn/run
Dynamic Python package installation is disabled.
Starting HTTP server
2021-11-24T06:47:11,636166300+00:00 - rsyslog/run
EdgeHubConnectionString and IOTEDGE_IOTHUBHOSTNAME are not set. Exiting...
2021-11-24T06:47:11,947537200+00:00 - iot-server/finish 1 0
2021-11-24T06:47:11,950503900+00:00 - Exit code 1 is normal. Not restarting iot-server.
Starting gunicorn 20.1.0
Listening at: http://127.0.0.1:31311 (67)
Using worker: sync
worker timeout is set to 300
Booting worker with pid: 99
SPARK_HOME not set. Skipping PySpark Initialization.
Initializing logger
2021-11-24 06:47:12,975 | root | INFO | Starting up app insights client
logging socket was found. logging is available.
logging socket was found. logging is available.
2021-11-24 06:47:12,976 | root | INFO | Starting up request id generator
2021-11-24 06:47:12,976 | root | INFO | Starting up app insight hooks
2021-11-24 06:47:12,976 | root | INFO | Invoking user's init function
no request id,/azureml-envs/azureml_b111972b96fe2f23e1032e165eb7c9c3/lib/python3.6/site-packages/sklearn/base.py:315: UserWarning: Trying to unpickle estimator DecisionTreeClassifier from version 0.22.2.post1 when using version 0.24.2. This might lead to breaking code or invalid results. Use at your own risk.
UserWarning)
2021-11-24 06:47:13,695 | root | INFO | Users's init has completed successfully
/azureml-envs/azureml_b111972b96fe2f23e1032e165eb7c9c3/lib/python3.6/site-packages/sklearn/base.py:315: UserWarning: Trying to unpickle estimator DecisionTreeClassifier from version 0.22.2.post1 when using version 0.24.2. This might lead to breaking code or invalid results. Use at your own risk.
UserWarning)
2021-11-24 06:47:13,702 | root | INFO | Skipping middleware: dbg_model_info as it's not enabled.
2021-11-24 06:47:13,702 | root | INFO | Skipping middleware: dbg_resource_usage as it's not enabled.
2021-11-24 06:47:13,703 | root | INFO | Scoring timeout is found from os.environ: 60000 ms
2021-11-24 06:47:20,498 | root | INFO | Swagger file not present
2021-11-24 06:47:20,499 | root | INFO | 404
127.0.0.1 - - [24/Nov/2021:06:47:20 +0000] "GET /swagger.json HTTP/1.0" 404 19 "-" "Go-http-client/1.1"
2021-11-24 06:47:24,077 | root | INFO | Swagger file not present
2021-11-24 06:47:24,077 | root | INFO | 404
127.0.0.1 - - [24/Nov/2021:06:47:24 +0000] "GET /swagger.json HTTP/1.0" 404 19 "-" "Go-http-client/1.1"
###Markdown
Take a look at your workspace in [Azure Machine Learning Studio](https://ml.azure.com) and view the **Endpoints** page, which shows the deployed services in your workspace. You can also retrieve the names of web services in your workspace by running the following code:
###Code
for webservice_name in ws.webservices:
print(webservice_name)
###Output
diabetes-service
###Markdown
Use the web service With the service deployed, now you can consume it from a client application.
###Code
import json
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22]]
print ('Patient: {}'.format(x_new[0]))
# Convert the array to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Call the web service, passing the input data (the web service will also accept the data in binary format)
predictions = service.run(input_data = input_json)
# Get the predicted class - it'll be the first (and only) one.
predicted_classes = json.loads(predictions)
print(predicted_classes[0])
###Output
Patient: [2, 180, 74, 24, 21, 23.9091702, 1.488172308, 22]
diabetic
###Markdown
You can also send multiple patient observations to the service, and get back a prediction for each one.
###Code
import json
# This time our input is an array of two feature arrays
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22],
[0,148,58,11,179,39.19207553,0.160829008,45]]
# Convert the array or arrays to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Call the web service, passing the input data
predictions = service.run(input_data = input_json)
# Get the predicted classes.
predicted_classes = json.loads(predictions)
for i in range(len(x_new)):
print ("Patient {}".format(x_new[i]), predicted_classes[i] )
###Output
Patient [2, 180, 74, 24, 21, 23.9091702, 1.488172308, 22] diabetic
Patient [0, 148, 58, 11, 179, 39.19207553, 0.160829008, 45] not-diabetic
###Markdown
The code above uses the Azure Machine Learning SDK to connect to the containerized web service and use it to generate predictions from your diabetes classification model. In production, a model is likely to be consumed by business applications that do not use the Azure Machine Learning SDK, but simply make HTTP requests to the web service. Let's determine the URL to which these applications must submit their requests:
###Code
endpoint = service.scoring_uri
print(endpoint)
###Output
http://f01d3e9c-0540-4988-8cec-0267a7d42ae2.southeastasia.azurecontainer.io/score
###Markdown
Now that you know the endpoint URI, an application can simply make an HTTP request, sending the patient data in JSON format, and receive back the predicted class(es).
###Code
import requests
import json
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22],
[0,148,58,11,179,39.19207553,0.160829008,45]]
# Convert the array to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Set the content type
headers = { 'Content-Type':'application/json' }
predictions = requests.post(endpoint, input_json, headers = headers)
predicted_classes = json.loads(predictions.json())
for i in range(len(x_new)):
print ("Patient {}".format(x_new[i]), predicted_classes[i] )
###Output
Patient [2, 180, 74, 24, 21, 23.9091702, 1.488172308, 22] diabetic
Patient [0, 148, 58, 11, 179, 39.19207553, 0.160829008, 45] not-diabetic
###Markdown
You've deployed your web service as an Azure Container Instance (ACI) service that requires no authentication. This is fine for development and testing, but for production you should consider deploying to an Azure Kubernetes Service (AKS) cluster and enabling token-based authentication. This would require REST requests to include an **Authorization** header. Delete the service When you no longer need your service, you should delete it to avoid incurring unnecessary charges.
###Code
service.delete()
print ('Service deleted.')
###Output
Service deleted.
###Markdown
Create a real-time inferencing service After training a predictive model, you can deploy it as a real-time service that clients can use to get predictions from new data. Connect to your workspace To get started, connect to your workspace. > **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.
###Code
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
###Output
Ready to use Azure ML 1.20.0 to work with training_dp100
###Markdown
Train and register a model Now let's train and register a model.
###Code
from azureml.core import Experiment
from azureml.core import Model
import pandas as pd
import numpy as np
import joblib
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
# Create an Azure ML experiment in your workspace
experiment = Experiment(workspace=ws, name="mslearn-train-diabetes")
run = experiment.start_logging()
print("Starting experiment:", experiment.name)
# load the diabetes dataset
print("Loading Data...")
diabetes = pd.read_csv('data/diabetes.csv')
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a decision tree model
print('Training a decision tree model')
model = DecisionTreeClassifier().fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# Save the trained model
model_file = 'diabetes_model.pkl'
joblib.dump(value=model, filename=model_file)
run.upload_file(name = 'outputs/' + model_file, path_or_stream = './' + model_file)
# Complete the run
run.complete()
# Register the model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
tags={'Training context':'Inline Training'},
properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
print('Model trained and registered.')
###Output
Starting experiment: mslearn-train-diabetes
Loading Data...
Training a decision tree model
Accuracy: 0.8923333333333333
AUC: 0.8808124782327478
Model trained and registered.
###Markdown
Deploy the model as a web service You have trained and registered a machine learning model that classifies patients based on the likelihood of them having diabetes. This model could be used in a production environment such as a doctor's surgery where only patients deemed to be at risk need to be subjected to a clinical test for diabetes. To support this scenario, you will deploy the model as a web service. First, let's determine what models you have registered in the workspace.
###Code
from azureml.core import Model
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
###Output
diabetes_model version: 7
Training context : Inline Training
AUC : 0.8808124782327478
Accuracy : 0.8923333333333333
diabetes_model version: 6
Training context : Pipeline
AUC : 0.8882269613989017
Accuracy : 0.9022222222222223
diabetes_model version: 5
Training context : Compute cluster
AUC : 0.8814498483013199
Accuracy : 0.8973333333333333
diabetes_model version: 4
Training context : File dataset
AUC : 0.8468497021067503
Accuracy : 0.7788888888888889
diabetes_model version: 3
Training context : Tabular dataset
AUC : 0.8568595320655352
Accuracy : 0.7891111111111111
diabetes_model version: 2
Training context : Parameterized script
AUC : 0.8483203144435048
Accuracy : 0.774
diabetes_model version: 1
Training context : Script
AUC : 0.8483203144435048
Accuracy : 0.774
AutoML4ce9669e80 version: 1
###Markdown
Right, now let's get the model that we want to deploy. By default, if we specify a model name, the latest version will be returned.
###Code
model = ws.models['diabetes_model']
print(model.name, 'version', model.version)
###Output
diabetes_model version 7
###Markdown
We're going to create a web service to host this model, and this will require some code and configuration files; so let's create a folder for those.
###Code
import os
folder_name = 'diabetes_service'
# Create a folder for the web service files
experiment_folder = './' + folder_name
os.makedirs(experiment_folder, exist_ok=True)
print(folder_name, 'folder created.')
# Set path for scoring script
script_file = os.path.join(experiment_folder,"score_diabetes.py")
###Output
diabetes_service folder created.
###Markdown
The web service where we deploy the model will need some Python code to load the input data, get the model from the workspace, and generate and return predictions. We'll save this code in an *entry script* (often called a *scoring script*) that will be deployed to the web service:
###Code
%%writefile $script_file
import json
import joblib
import numpy as np
from azureml.core.model import Model
# Called when the service is loaded
def init():
global model
# Get the path to the deployed model file and load it
model_path = Model.get_model_path('diabetes_model')
model = joblib.load(model_path)
# Called when a request is received
def run(raw_data):
# Get the input data as a numpy array
data = np.array(json.loads(raw_data)['data'])
# Get a prediction from the model
predictions = model.predict(data)
# Get the corresponding classname for each prediction (0 or 1)
classnames = ['not-diabetic', 'diabetic']
predicted_classes = []
for prediction in predictions:
predicted_classes.append(classnames[prediction])
# Return the predictions as JSON
return json.dumps(predicted_classes)
###Output
Writing ./diabetes_service/score_diabetes.py
###Markdown
The web service will be hosted in a container, and the container will need to install any required Python dependencies when it gets initialized. In this case, our scoring code requires **scikit-learn**, so we'll create a .yml file that tells the container host to install this into the environment.
###Code
from azureml.core.conda_dependencies import CondaDependencies
# Add the dependencies for our model (AzureML defaults is already included)
myenv = CondaDependencies()
myenv.add_conda_package('scikit-learn')
# Save the environment config as a .yml file
env_file = os.path.join(experiment_folder,"diabetes_env.yml")
with open(env_file,"w") as f:
f.write(myenv.serialize_to_string())
print("Saved dependency info in", env_file)
# Print the .yml file
with open(env_file,"r") as f:
print(f.read())
###Output
Saved dependency info in ./diabetes_service/diabetes_env.yml
# Conda environment specification. The dependencies defined in this file will
# be automatically provisioned for runs with userManagedDependencies=False.
# Details about the Conda environment file format:
# https://conda.io/docs/user-guide/tasks/manage-environments.html#create-env-file-manually
name: project_environment
dependencies:
# The python interpreter version.
# Currently Azure ML only supports 3.5.2 and later.
- python=3.6.2
- pip:
# Required packages for AzureML execution, history, and data preparation.
- azureml-defaults
- scikit-learn
channels:
- anaconda
- conda-forge
###Markdown
Now you're ready to deploy. We'll deploy the container as a service named **diabetes-service**. The deployment process includes the following steps: 1. Define an inference configuration, which includes the scoring and environment files required to load and use the model. 2. Define a deployment configuration that defines the execution environment in which the service will be hosted. In this case, an Azure Container Instance. 3. Deploy the model as a web service. 4. Verify the status of the deployed service. > **More Information**: For more details about model deployment, and options for target execution environments, see the [documentation](https://docs.microsoft.com/azure/machine-learning/how-to-deploy-and-where). Deployment will take some time as it first runs a process to create a container image, and then runs a process to create a web service based on the image. When deployment has completed successfully, you'll see a status of **Healthy**.
###Code
from azureml.core.webservice import AciWebservice
from azureml.core.model import InferenceConfig
# Configure the scoring environment
inference_config = InferenceConfig(runtime= "python",
entry_script=script_file,
conda_file=env_file)
deployment_config = AciWebservice.deploy_configuration(cpu_cores = 1, memory_gb = 1)
service_name = "diabetes-service"
service = Model.deploy(ws, service_name, [model], inference_config, deployment_config)
service.wait_for_deployment(True)
print(service.state)
###Output
Tips: You can try get_logs(): https://aka.ms/debugimage#dockerlog or local deployment: https://aka.ms/debugimage#debug-locally to debug if deployment takes longer than 10 minutes.
Running.........................................................................................................
Succeeded
ACI service creation operation finished, operation "Succeeded"
Healthy
###Markdown
Hopefully, the deployment has been successful and you can see a status of **Healthy**. If not, you can use the following code to get the service logs to help you troubleshoot.
###Code
print(service.get_logs())
# If you need to make a change and redeploy, you may need to delete unhealthy service using the following code:
#service.delete()
###Output
2021-02-02T15:41:32,507223331+00:00 - iot-server/run
2021-02-02T15:41:32,507261732+00:00 - rsyslog/run
2021-02-02T15:41:32,507893435+00:00 - gunicorn/run
2021-02-02T15:41:32,555582197+00:00 - nginx/run
/usr/sbin/nginx: /azureml-envs/azureml_4b824bcb98517d791c41923f24d65461/lib/libcrypto.so.1.0.0: no version information available (required by /usr/sbin/nginx)
/usr/sbin/nginx: /azureml-envs/azureml_4b824bcb98517d791c41923f24d65461/lib/libcrypto.so.1.0.0: no version information available (required by /usr/sbin/nginx)
/usr/sbin/nginx: /azureml-envs/azureml_4b824bcb98517d791c41923f24d65461/lib/libssl.so.1.0.0: no version information available (required by /usr/sbin/nginx)
/usr/sbin/nginx: /azureml-envs/azureml_4b824bcb98517d791c41923f24d65461/lib/libssl.so.1.0.0: no version information available (required by /usr/sbin/nginx)
/usr/sbin/nginx: /azureml-envs/azureml_4b824bcb98517d791c41923f24d65461/lib/libssl.so.1.0.0: no version information available (required by /usr/sbin/nginx)
EdgeHubConnectionString and IOTEDGE_IOTHUBHOSTNAME are not set. Exiting...
2021-02-02T15:41:32,812672407+00:00 - iot-server/finish 1 0
2021-02-02T15:41:32,819737846+00:00 - Exit code 1 is normal. Not restarting iot-server.
Starting gunicorn 19.9.0
Listening at: http://127.0.0.1:31311 (12)
Using worker: sync
worker timeout is set to 300
Booting worker with pid: 38
SPARK_HOME not set. Skipping PySpark Initialization.
Initializing logger
2021-02-02 15:41:33,909 | root | INFO | Starting up app insights client
2021-02-02 15:41:33,909 | root | INFO | Starting up request id generator
2021-02-02 15:41:33,909 | root | INFO | Starting up app insight hooks
2021-02-02 15:41:33,909 | root | INFO | Invoking user's init function
/azureml-envs/azureml_4b824bcb98517d791c41923f24d65461/lib/python3.6/site-packages/sklearn/base.py:334: UserWarning: Trying to unpickle estimator DecisionTreeClassifier from version 0.22.2.post1 when using version 0.23.2. This might lead to breaking code or invalid results. Use at your own risk.
UserWarning)
2021-02-02 15:41:34,255 | root | INFO | Users's init has completed successfully
2021-02-02 15:41:34,257 | root | INFO | Skipping middleware: dbg_model_info as it's not enabled.
2021-02-02 15:41:34,257 | root | INFO | Skipping middleware: dbg_resource_usage as it's not enabled.
2021-02-02 15:41:34,258 | root | INFO | Scoring timeout is found from os.environ: 60000 ms
2021-02-02 15:41:40,181 | root | INFO | Swagger file not present
2021-02-02 15:41:40,181 | root | INFO | 404
127.0.0.1 - - [02/Feb/2021:15:41:40 +0000] "GET /swagger.json HTTP/1.0" 404 19 "-" "Go-http-client/1.1"
2021-02-02 15:41:45,631 | root | INFO | Swagger file not present
2021-02-02 15:41:45,631 | root | INFO | 404
127.0.0.1 - - [02/Feb/2021:15:41:45 +0000] "GET /swagger.json HTTP/1.0" 404 19 "-" "Go-http-client/1.1"
###Markdown
Take a look at your workspace in [Azure Machine Learning Studio](https://ml.azure.com) and view the **Endpoints** page, which shows the deployed services in your workspace.
You can also retrieve the names of web services in your workspace by running the following code:
###Code
for webservice_name in ws.webservices:
print(webservice_name)
###Output
diabetes-service
###Markdown
Use the web service
With the service deployed, now you can consume it from a client application.
###Code
import json
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22]]
print ('Patient: {}'.format(x_new[0]))
# Convert the array to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Call the web service, passing the input data (the web service will also accept the data in binary format)
predictions = service.run(input_data = input_json)
# Get the predicted class - it'll be the first (and only) one.
predicted_classes = json.loads(predictions)
print(predicted_classes[0])
###Output
Patient: [2, 180, 74, 24, 21, 23.9091702, 1.488172308, 22]
diabetic
###Markdown
You can also send multiple patient observations to the service, and get back a prediction for each one.
###Code
import json
# This time our input is an array of two feature arrays
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22],
[0,148,58,11,179,39.19207553,0.160829008,45]]
# Convert the array or arrays to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Call the web service, passing the input data
predictions = service.run(input_data = input_json)
# Get the predicted classes.
predicted_classes = json.loads(predictions)
for i in range(len(x_new)):
print ("Patient {}".format(x_new[i]), predicted_classes[i] )
###Output
Patient [2, 180, 74, 24, 21, 23.9091702, 1.488172308, 22] diabetic
Patient [0, 148, 58, 11, 179, 39.19207553, 0.160829008, 45] not-diabetic
###Markdown
The code above uses the Azure Machine Learning SDK to connect to the containerized web service and use it to generate predictions from your diabetes classification model. In production, a model is likely to be consumed by business applications that do not use the Azure Machine Learning SDK, but simply make HTTP requests to the web service.
Let's determine the URL to which these applications must submit their requests:
###Code
endpoint = service.scoring_uri
print(endpoint)
###Output
http://a49ab466-2391-4a14-85a1-31af74a86b83.westeurope.azurecontainer.io/score
###Markdown
Now that you know the endpoint URI, an application can simply make an HTTP request, sending the patient data in JSON format, and receive back the predicted class(es).
###Code
import requests
import json
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22],
[0,148,58,11,179,39.19207553,0.160829008,45]]
# Convert the array to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Set the content type
headers = { 'Content-Type':'application/json' }
predictions = requests.post(endpoint, input_json, headers = headers)
predicted_classes = json.loads(predictions.json())
for i in range(len(x_new)):
print ("Patient {}".format(x_new[i]), predicted_classes[i] )
###Output
Patient [2, 180, 74, 24, 21, 23.9091702, 1.488172308, 22] diabetic
Patient [0, 148, 58, 11, 179, 39.19207553, 0.160829008, 45] not-diabetic
###Markdown
You've deployed your web service as an Azure Container Instance (ACI) service that requires no authentication. This is fine for development and testing, but for production you should consider deploying to an Azure Kubernetes Service (AKS) cluster and enabling token-based authentication. This would require REST requests to include an **Authorization** header, as in the hypothetical sketch below.
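The cell below is a minimal, hypothetical sketch rather than part of this lab: it assumes the service has been redeployed to AKS with token authentication enabled, and it reuses the `endpoint` and `input_json` variables from above. `get_token()` is the AKS token-auth call; for key-based authentication you would use `service.get_keys()` instead.
###Code
# Hypothetical sketch: calling a token-secured AKS endpoint.
# Assumes 'service' is an AksWebservice deployed with token_auth_enabled=True.
import requests
# Retrieve a bearer token and the time after which it should be refreshed
token, refresh_by = service.get_token()
headers = { 'Content-Type':'application/json',
'Authorization':'Bearer ' + token }
predictions = requests.post(endpoint, input_json, headers = headers)
print(predictions.json())
###Output
_____no_output_____
###Markdown
Delete the service
When you no longer need your service, you should delete it to avoid incurring unnecessary charges.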
###Code
service.delete()
print ('Service deleted.')
###Output
Service deleted.
###Markdown
Create a real-time inferencing service
After training a predictive model, you can deploy it as a real-time service that clients can use to get predictions from new data.
Connect to your workspace
To start working in this notebook, first connect to your workspace.
> **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.
###Code
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
###Output
_____no_output_____
###Markdown
Train and register a model
Now let's train and register a model.
###Code
from azureml.core import Experiment
from azureml.core import Model
import pandas as pd
import numpy as np
import joblib
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
# Create an Azure ML experiment in your workspace
experiment = Experiment(workspace=ws, name="mslearn-train-diabetes")
run = experiment.start_logging()
print("Starting experiment:", experiment.name)
# load the diabetes dataset
print("Loading Data...")
diabetes = pd.read_csv('data/diabetes.csv')
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a decision tree model
print('Training a decision tree model')
model = DecisionTreeClassifier().fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# Save the trained model
model_file = 'diabetes_model.pkl'
joblib.dump(value=model, filename=model_file)
run.upload_file(name = 'outputs/' + model_file, path_or_stream = './' + model_file)
# Complete the run
run.complete()
# Register the model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
tags={'Training context':'Inline Training'},
properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
print('Model trained and registered.')
###Output
_____no_output_____
###Markdown
Deploy the model as a web service
You have trained and registered a machine learning model that classifies patients based on the likelihood of them having diabetes. This model could be used in a production environment such as a doctor's surgery, where only patients deemed to be at risk need to be subjected to a clinical test for diabetes. To support this scenario, you will deploy the model as a web service.
First, let's determine what models you have registered in the workspace.
###Code
from azureml.core import Model
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
###Output
_____no_output_____
###Markdown
Now let's get the model that we want to deploy. By default, if we specify a model name, the latest version will be returned.
###Code
model = ws.models['diabetes_model']
print(model.name, 'version', model.version)
###Output
_____no_output_____
###Markdown
We're going to create a web service to host this model, and this will require some code and configuration files; so let's create a folder for those.
###Code
import os
# Create a folder for the deployment files
deployment_folder = './diabetes_service'
os.makedirs(deployment_folder, exist_ok=True)
print(deployment_folder, 'folder created.')
# Set path for scoring script
script_file = 'score_diabetes.py'
script_path = os.path.join(deployment_folder,script_file)
###Output
_____no_output_____
###Markdown
The web service where we deploy the model will need some Python code to load the input data, get the model from the workspace, and generate and return predictions. We'll save this code in an *entry script* (often called a *scoring script*) that will be deployed to the web service.
The script consists of two functions:
- **init**: This function is called when the service is initialized, and is typically used to load the model. The scoring script uses the **AZUREML_MODEL_DIR** environment variable to determine the folder where the model is stored.
- **run**: This function is called each time a client application submits new data, and is typically used to infer predictions from the model.
###Code
%%writefile $script_path
import json
import joblib
import numpy as np
import os
# Called when the service is loaded
def init():
global model
# Get the path to the deployed model file and load it
model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'diabetes_model.pkl')
model = joblib.load(model_path)
# Called when a request is received
def run(raw_data):
# Get the input data as a numpy array
data = np.array(json.loads(raw_data)['data'])
# Get a prediction from the model
predictions = model.predict(data)
# Get the corresponding classname for each prediction (0 or 1)
classnames = ['not-diabetic', 'diabetic']
predicted_classes = []
for prediction in predictions:
predicted_classes.append(classnames[prediction])
# Return the predictions as JSON
return json.dumps(predicted_classes)
###Output
_____no_output_____
###Markdown
The web service will be hosted in a container, and the container will need to install any required Python dependencies when it gets initialized. In this case, the scoring script requires **scikit-learn** and some Azure Machine Learning specific packages that are used by the scoring web service, so we'll create an environment that includes these. Then we'll add that environment to an *inference configuration* along with the scoring script, and define a *deployment configuration* for the container in which the environment and script will be hosted.
We can then deploy the model as a service based on these configurations.
> **More Information**: For more details about model deployment, and options for target execution environments, see the [documentation](https://docs.microsoft.com/azure/machine-learning/how-to-deploy-and-where).
Deployment will take some time as it first runs a process to create a container image, and then runs a process to create a web service based on the image. When deployment has completed successfully, you'll see a status of **Healthy**.
###Code
from azureml.core import Environment
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
# Configure the scoring environment
service_env = Environment(name='service-env')
python_packages = ['scikit-learn', 'azureml-defaults', 'azure-ml-api-sdk']
for package in python_packages:
service_env.python.conda_dependencies.add_pip_package(package)
inference_config = InferenceConfig(source_directory=deployment_folder,
entry_script=script_file,
environment=service_env)
# Configure the web service container
deployment_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)
# Deploy the model as a service
print('Deploying model...')
service_name = "diabetes-service"
service = Model.deploy(ws, service_name, [model], inference_config, deployment_config, overwrite=True)
service.wait_for_deployment(True)
print(service.state)
###Output
_____no_output_____
###Markdown
Hopefully, the deployment has been successful and you can see a status of **Healthy**. If not, you can use the following code to get the service logs to help you troubleshoot.
###Code
print(service.get_logs())
# If you need to make a change and redeploy, you may need to delete unhealthy service using the following code:
#service.delete()
###Output
_____no_output_____
###Markdown
Take a look at your workspace in [Azure Machine Learning Studio](https://ml.azure.com) and view the **Endpoints** page, which shows the deployed services in your workspace.
You can also retrieve the names of web services in your workspace by running the following code:
###Code
for webservice_name in ws.webservices:
print(webservice_name)
###Output
_____no_output_____
###Markdown
Use the web service
With the service deployed, now you can consume it from a client application.
###Code
import json
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22]]
print ('Patient: {}'.format(x_new[0]))
# Convert the array to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Call the web service, passing the input data (the web service will also accept the data in binary format)
predictions = service.run(input_data = input_json)
# Get the predicted class - it'll be the first (and only) one.
predicted_classes = json.loads(predictions)
print(predicted_classes[0])
###Output
_____no_output_____
###Markdown
You can also send multiple patient observations to the service, and get back a prediction for each one.
###Code
import json
# This time our input is an array of two feature arrays
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22],
[0,148,58,11,179,39.19207553,0.160829008,45]]
# Convert the array or arrays to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Call the web service, passing the input data
predictions = service.run(input_data = input_json)
# Get the predicted classes.
predicted_classes = json.loads(predictions)
for i in range(len(x_new)):
print ("Patient {}".format(x_new[i]), predicted_classes[i] )
###Output
_____no_output_____
###Markdown
The code above uses the Azure Machine Learning SDK to connect to the containerized web service and use it to generate predictions from your diabetes classification model. In production, a model is likely to be consumed by business applications that do not use the Azure Machine Learning SDK, but simply make HTTP requests to the web service.
Let's determine the URL to which these applications must submit their requests:
###Code
endpoint = service.scoring_uri
print(endpoint)
###Output
_____no_output_____
###Markdown
Now that you know the endpoint URI, an application can simply make an HTTP request, sending the patient data in JSON format, and receive back the predicted class(es).
###Code
import requests
import json
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22],
[0,148,58,11,179,39.19207553,0.160829008,45]]
# Convert the array to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Set the content type
headers = { 'Content-Type':'application/json' }
predictions = requests.post(endpoint, input_json, headers = headers)
predicted_classes = json.loads(predictions.json())
for i in range(len(x_new)):
print ("Patient {}".format(x_new[i]), predicted_classes[i] )
###Output
_____no_output_____
###Markdown
You've deployed your web service as an Azure Container Instance (ACI) service that requires no authentication. This is fine for development and testing, but for production you should consider deploying to an Azure Kubernetes Service (AKS) cluster and enabling token-based authentication. This would require REST requests to include an **Authorization** header.
Delete the service
When you no longer need your service, you should delete it to avoid incurring unnecessary charges.
###Code
service.delete()
print ('Service deleted.')
###Output
_____no_output_____
###Markdown
Create a real-time inferencing service
After training a predictive model, you can deploy it as a real-time service that clients can use to get predictions from new data.
Connect to your workspace
To get started, connect to your workspace.
> **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.
###Code
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
###Output
_____no_output_____
###Markdown
Train and register a model
Now let's train and register a model.
###Code
from azureml.core import Experiment
from azureml.core import Model
import pandas as pd
import numpy as np
import joblib
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
# Create an Azure ML experiment in your workspace
experiment = Experiment(workspace=ws, name="mslearn-train-diabetes")
run = experiment.start_logging()
print("Starting experiment:", experiment.name)
# Load the diabetes dataset
print("Loading Data...")
diabetes = pd.read_csv('data/diabetes.csv')
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a decision tree model
print('Training a decision tree model')
model = DecisionTreeClassifier().fit(X_train, y_train)
# Calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# Calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# Save the trained model
model_file = 'diabetes_model.pkl'
joblib.dump(value=model, filename=model_file)
run.upload_file(name = 'outputs/' + model_file, path_or_stream = './' + model_file)
# Complete the run
run.complete()
# Register the model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
tags={'Training context':'Inline Training'},
properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
print('Model trained and registered.')
###Output
_____no_output_____
###Markdown
Deploy the model as a web service
You have trained and registered a machine learning model that classifies patients based on the likelihood of them having diabetes. This model could be used in a production environment such as a doctor's surgery, where only patients deemed to be at risk need to be subjected to a clinical test for diabetes. To support this scenario, you will deploy the model as a web service.
First, let's determine what models you have registered in the workspace.
###Code
from azureml.core import Model
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
###Output
_____no_output_____
###Markdown
Now let's get the model that we want to deploy. By default, if we specify a model name, the latest version will be returned.
###Code
model = ws.models['diabetes_model']
print(model.name, 'version', model.version)
###Output
_____no_output_____
###Markdown
We're going to create a web service to host this model, and this will require some code and configuration files; so let's create a folder for those.
###Code
import os
folder_name = 'diabetes_service'
# Create a folder for the web service files
experiment_folder = './' + folder_name
os.makedirs(experiment_folder, exist_ok=True)
print(folder_name, 'folder created.')
# Set path for scoring script
script_file = os.path.join(experiment_folder,"score_diabetes.py")
###Output
_____no_output_____
###Markdown
The web service where we deploy the model will need some Python code to load the input data, get the model from the workspace, and generate and return predictions. We'll save this code in an *entry script* (often called a *scoring script*) that will be deployed to the web service.
###Code
%%writefile $script_file
import json
import joblib
import numpy as np
from azureml.core.model import Model
# Called when the service is loaded
def init():
global model
# Get the path to the deployed model file and load it
model_path = Model.get_model_path('diabetes_model')
model = joblib.load(model_path)
# Called when a request is received
def run(raw_data):
# Get the input data as a numpy array
data = np.array(json.loads(raw_data)['data'])
# Get a prediction from the model
predictions = model.predict(data)
# Get the corresponding classname for each prediction (0 or 1)
classnames = ['not-diabetic', 'diabetic']
predicted_classes = []
for prediction in predictions:
predicted_classes.append(classnames[prediction])
# Return the predictions as JSON
return json.dumps(predicted_classes)
###Output
_____no_output_____
###Markdown
The web service will be hosted in a container, and the container will need to install any required Python dependencies when it gets initialized. In this case, the scoring code requires **scikit-learn**, so we'll create a .yml file that tells the container host to install this into the environment.
###Code
from azureml.core.conda_dependencies import CondaDependencies
# Add the dependencies for our model (AzureML defaults is already included)
myenv = CondaDependencies()
myenv.add_conda_package('scikit-learn')
# Save the environment config as a .yml file
env_file = os.path.join(experiment_folder,"diabetes_env.yml")
with open(env_file,"w") as f:
f.write(myenv.serialize_to_string())
print("Saved dependency info in", env_file)
# Print the .yml file
with open(env_file,"r") as f:
print(f.read())
###Output
_____no_output_____
###Markdown
Now you're ready to deploy. We'll deploy the container as a service named **diabetes-service**. The deployment process includes the following steps:
1. Define an inference configuration, which includes the scoring and environment files required to load and use the model.
2. Define a deployment configuration that defines the execution environment in which the service will be hosted. In this case, an Azure Container Instance.
3. Deploy the model as a web service.
4. Verify the status of the deployed service.
> **More Information**: For more details about model deployment, and options for target execution environments, see the [documentation](https://docs.microsoft.com/azure/machine-learning/how-to-deploy-and-where).
Deployment will take some time as it first runs a process to create a container image, and then runs a process to create a web service based on the image. When deployment has completed successfully, you'll see a status of **Healthy**.
###Code
from azureml.core.webservice import AciWebservice
from azureml.core.model import InferenceConfig
# Configure the scoring environment
inference_config = InferenceConfig(runtime= "python",
entry_script=script_file,
conda_file=env_file)
deployment_config = AciWebservice.deploy_configuration(cpu_cores = 1, memory_gb = 1)
service_name = "diabetes-service"
service = Model.deploy(ws, service_name, [model], inference_config, deployment_config)
service.wait_for_deployment(True)
print(service.state)
###Output
_____no_output_____
###Markdown
Hopefully, the deployment has been successful and you can see a status of **Healthy**. If not, you can use the following code to get the service logs to help you troubleshoot.
###Code
print(service.get_logs())
# If you need to make a change and redeploy, you may need to delete unhealthy service using the following code:
#service.delete()
###Output
_____no_output_____
###Markdown
Take a look at your workspace in [Azure Machine Learning Studio](https://ml.azure.com) and view the **Endpoints** page, which shows the deployed services in your workspace.
You can also retrieve the names of web services in your workspace by running the following code:
###Code
for webservice_name in ws.webservices:
print(webservice_name)
###Output
_____no_output_____
###Markdown
Use the web service
With the service deployed, now you can consume it from a client application.
###Code
import json
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22]]
print ('Patient: {}'.format(x_new[0]))
# Convert the array to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Call the web service, passing the input data (the web service will also accept the data in binary format)
predictions = service.run(input_data = input_json)
# Get the predicted class - it'll be the first (and only) one.
predicted_classes = json.loads(predictions)
print(predicted_classes[0])
###Output
_____no_output_____
###Markdown
You can also send multiple patient observations to the service, and get back a prediction for each one.
###Code
import json
# This time our input is an array of two feature arrays
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22],
[0,148,58,11,179,39.19207553,0.160829008,45]]
# Convert the array or arrays to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Call the web service, passing the input data
predictions = service.run(input_data = input_json)
# Get the predicted classes
predicted_classes = json.loads(predictions)
for i in range(len(x_new)):
print ("Patient {}".format(x_new[i]), predicted_classes[i] )
###Output
_____no_output_____
###Markdown
The code above uses the Azure Machine Learning SDK to connect to the containerized web service and use it to generate predictions from your diabetes classification model. In production, a model is likely to be consumed by business applications that do not use the Azure Machine Learning SDK, but simply make HTTP requests to the web service.
Let's determine the URL to which these applications must submit their requests:
###Code
endpoint = service.scoring_uri
print(endpoint)
###Output
_____no_output_____
###Markdown
Now that you know the endpoint URI, an application can simply make an HTTP request, sending the patient data in JSON format, and receive back the predicted class(es).
###Code
import requests
import json
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22],
[0,148,58,11,179,39.19207553,0.160829008,45]]
# Convert the array to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Set the content type
headers = { 'Content-Type':'application/json' }
predictions = requests.post(endpoint, input_json, headers = headers)
predicted_classes = json.loads(predictions.json())
for i in range(len(x_new)):
print ("Patient {}".format(x_new[i]), predicted_classes[i] )
###Output
_____no_output_____
###Markdown
You've deployed your web service as an Azure Container Instance (ACI) service that requires no authentication. This is fine for development and testing, but for production you should consider deploying to an Azure Kubernetes Service (AKS) cluster and enabling token-based authentication. This would require REST requests to include an **Authorization** header.
Delete the service
When you no longer need your service, you should delete it to avoid incurring unnecessary charges.
###Code
service.delete()
print ('Service deleted.')
###Output
_____no_output_____
###Markdown
Create a real-time inferencing service
After training a predictive model, you can deploy it as a real-time service that clients can use to get predictions from new data.
Connect to your workspace
To start working in this notebook, first connect to your workspace.
> **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.
###Code
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
###Output
_____no_output_____
###Markdown
Train and register a model
Now let's train and register a model.
###Code
from azureml.core import Experiment
from azureml.core import Model
import pandas as pd
import numpy as np
import joblib
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
# Create an Azure ML experiment in your workspace
experiment = Experiment(workspace=ws, name="mslearn-train-diabetes")
run = experiment.start_logging()
print("Starting experiment:", experiment.name)
# Load the diabetes dataset
print("Loading Data...")
diabetes = pd.read_csv('data/diabetes.csv')
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a decision tree model
print('Training a decision tree model')
model = DecisionTreeClassifier().fit(X_train, y_train)
# Calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# Calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# Save the trained model
model_file = 'diabetes_model.pkl'
joblib.dump(value=model, filename=model_file)
run.upload_file(name = 'outputs/' + model_file, path_or_stream = './' + model_file)
# Complete the run
run.complete()
# Register the model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
tags={'Training context':'Inline Training'},
properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
print('Model trained and registered.')
###Output
_____no_output_____
###Markdown
Deploy the model as a web service
You have trained and registered a machine learning model that classifies patients based on the likelihood of them having diabetes. This model could be used in a production environment such as a doctor's surgery, where only patients deemed to be at risk need to be subjected to a clinical test for diabetes. To support this scenario, you will deploy the model as a web service.
First, let's determine what models you have registered in the workspace.
###Code
from azureml.core import Model
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
###Output
_____no_output_____
###Markdown
Now let's get the model that we want to deploy. By default, if we specify a model name, the latest version will be returned.
###Code
model = ws.models['diabetes_model']
print(model.name, 'version', model.version)
###Output
_____no_output_____
###Markdown
We're going to create a web service to host this model, and this will require some code and configuration files; so let's create a folder for those.
###Code
import os
folder_name = 'diabetes_service'
# Create a folder for the web service files
experiment_folder = './' + folder_name
os.makedirs(experiment_folder, exist_ok=True)
print(folder_name, 'folder created.')
# Set path for scoring script
script_file = os.path.join(experiment_folder,"score_diabetes.py")
###Output
_____no_output_____
###Markdown
The web service where we deploy the model will need some Python code to load the input data, get the model from the workspace, and generate and return predictions. We'll save this code in an *entry script* (often called a *scoring script*) that will be deployed to the web service.
###Code
%%writefile $script_file
import json
import joblib
import numpy as np
from azureml.core.model import Model
# Called when the service is loaded
def init():
global model
# Get the path to the deployed model file and load it
model_path = Model.get_model_path('diabetes_model')
model = joblib.load(model_path)
# Called when a request is received
def run(raw_data):
# Get the input data as a numpy array
data = np.array(json.loads(raw_data)['data'])
# Get a prediction from the model
predictions = model.predict(data)
# Get the corresponding classname for each prediction (0 or 1)
classnames = ['not-diabetic', 'diabetic']
predicted_classes = []
for prediction in predictions:
predicted_classes.append(classnames[prediction])
# Return the predictions as JSON
return json.dumps(predicted_classes)
###Output
_____no_output_____
###Markdown
The web service will be hosted in a container, and the container will need to install any required Python dependencies when it gets initialized. In this case, the scoring code requires **scikit-learn**, so we'll create a .yml file that tells the container host to install this into the environment.
###Code
from azureml.core.conda_dependencies import CondaDependencies
# Add the dependencies for our model (AzureML defaults is already included)
myenv = CondaDependencies()
myenv.add_conda_package('scikit-learn')
# Save the environment config as a .yml file
env_file = os.path.join(experiment_folder,"diabetes_env.yml")
with open(env_file,"w") as f:
f.write(myenv.serialize_to_string())
print("Saved dependency info in", env_file)
# Print the .yml file
with open(env_file,"r") as f:
print(f.read())
###Output
_____no_output_____
###Markdown
Now you're ready to deploy. We'll deploy the container as a service named **diabetes-service**. The deployment process includes the following steps:
1. Define an inference configuration, which includes the scoring and environment files required to load and use the model.
2. Define a deployment configuration that defines the execution environment in which the service will be hosted. In this case, an Azure Container Instance.
3. Deploy the model as a web service.
4. Verify the status of the deployed service.
> **More Information**: For more details about model deployment, and options for target execution environments, see the [documentation](https://docs.microsoft.com/azure/machine-learning/how-to-deploy-and-where).
Deployment will take some time as it first runs a process to create a container image, and then runs a process to create a web service based on the image. When deployment has completed successfully, you'll see a status of **Healthy**.
###Code
from azureml.core.webservice import AciWebservice
from azureml.core.model import InferenceConfig
# Configure the scoring environment
inference_config = InferenceConfig(runtime= "python",
entry_script=script_file,
conda_file=env_file)
deployment_config = AciWebservice.deploy_configuration(cpu_cores = 1, memory_gb = 1)
service_name = "diabetes-service"
service = Model.deploy(ws, service_name, [model], inference_config, deployment_config)
service.wait_for_deployment(True)
print(service.state)
###Output
_____no_output_____
###Markdown
Hopefully, the deployment has been successful and you can see a status of **Healthy**. If not, you can use the following code to get the service logs to help you troubleshoot.
###Code
print(service.get_logs())
# If you need to make a change and redeploy, you may need to delete unhealthy service using the following code:
#service.delete()
###Output
_____no_output_____
###Markdown
Take a look at your workspace in [Azure Machine Learning Studio](https://ml.azure.com) and view the **Endpoints** page, which shows the deployed services in your workspace.
You can also retrieve the names of web services in your workspace by running the following code:
###Code
for webservice_name in ws.webservices:
print(webservice_name)
###Output
_____no_output_____
###Markdown
Use the web service
With the service deployed, now you can consume it from a client application.
###Code
import json
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22]]
print ('Patient: {}'.format(x_new[0]))
# Convert the array to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Call the web service, passing the input data (the web service will also accept the data in binary format)
predictions = service.run(input_data = input_json)
# Get the predicted class - it'll be the first (and only) one.
predicted_classes = json.loads(predictions)
print(predicted_classes[0])
###Output
_____no_output_____
###Markdown
You can also send multiple patient observations to the service, and get back a prediction for each one.
###Code
import json
# This time our input is an array of two feature arrays
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22],
[0,148,58,11,179,39.19207553,0.160829008,45]]
# Convert the array or arrays to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Call the web service, passing the input data
predictions = service.run(input_data = input_json)
# Get the predicted classes
predicted_classes = json.loads(predictions)
for i in range(len(x_new)):
print ("Patient {}".format(x_new[i]), predicted_classes[i] )
###Output
_____no_output_____
###Markdown
The code above uses the Azure Machine Learning SDK to connect to the containerized web service and use it to generate predictions from your diabetes classification model. In production, a model is likely to be consumed by business applications that do not use the Azure Machine Learning SDK, but simply make HTTP requests to the web service.
Let's determine the URL to which these applications must submit their requests:
###Code
endpoint = service.scoring_uri
print(endpoint)
###Output
_____no_output_____
###Markdown
Now that you know the endpoint URI, an application can simply make an HTTP request, sending the patient data in JSON format, and receive back the predicted class(es).
###Code
import requests
import json
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22],
[0,148,58,11,179,39.19207553,0.160829008,45]]
# Convert the array to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Set the content type
headers = { 'Content-Type':'application/json' }
predictions = requests.post(endpoint, input_json, headers = headers)
predicted_classes = json.loads(predictions.json())
for i in range(len(x_new)):
print ("Patient {}".format(x_new[i]), predicted_classes[i] )
###Output
_____no_output_____
###Markdown
You've deployed your web service as an Azure Container Instance (ACI) service that requires no authentication. This is fine for development and testing, but for production you should consider deploying to an Azure Kubernetes Service (AKS) cluster and enabling token-based authentication. This would require REST requests to include an **Authorization** header.
Delete the service
When you no longer need your service, you should delete it to avoid incurring unnecessary charges.
###Code
service.delete()
print ('Service deleted.')
###Output
_____no_output_____
###Markdown
Create a real-time inferencing service
After training a predictive model, you can deploy it as a real-time service that clients can use to get predictions from new data.
Install the Azure Machine Learning SDK
The Azure Machine Learning SDK is updated frequently. Run the following cell to upgrade to the latest release, along with the additional package to support notebook widgets.
###Code
!pip install --upgrade azureml-sdk azureml-widgets
###Output
_____no_output_____
###Markdown
Connect to your workspace
With the latest version of the SDK installed, now you're ready to connect to your workspace.
> **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.
###Code
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
###Output
_____no_output_____
###Markdown
Train and register a model
Now let's train and register a model.
###Code
from azureml.core import Experiment
from azureml.core import Model
import pandas as pd
import numpy as np
import joblib
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
# Create an Azure ML experiment in your workspace
experiment = Experiment(workspace=ws, name="mslearn-train-diabetes")
run = experiment.start_logging()
print("Starting experiment:", experiment.name)
# load the diabetes dataset
print("Loading Data...")
diabetes = pd.read_csv('data/diabetes.csv')
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a decision tree model
print('Training a decision tree model')
model = DecisionTreeClassifier().fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# Save the trained model
model_file = 'diabetes_model.pkl'
joblib.dump(value=model, filename=model_file)
run.upload_file(name = 'outputs/' + model_file, path_or_stream = './' + model_file)
# Complete the run
run.complete()
# Register the model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
tags={'Training context':'Inline Training'},
properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
print('Model trained and registered.')
###Output
_____no_output_____
###Markdown
Deploy the model as a web service
You have trained and registered a machine learning model that classifies patients based on the likelihood of them having diabetes. This model could be used in a production environment such as a doctor's surgery where only patients deemed to be at risk need to be subjected to a clinical test for diabetes. To support this scenario, you will deploy the model as a web service.
First, let's determine what models you have registered in the workspace.
###Code
from azureml.core import Model
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
###Output
_____no_output_____
###Markdown
Right, now let's get the model that we want to deploy. By default, if we specify a model name, the latest version will be returned.
###Code
model = ws.models['diabetes_model']
print(model.name, 'version', model.version)
###Output
_____no_output_____
###Markdown
We're going to create a web service to host this model, and this will require some code and configuration files; so let's create a folder for those.
###Code
import os
folder_name = 'diabetes_service'
# Create a folder for the web service files
experiment_folder = './' + folder_name
os.makedirs(experiment_folder, exist_ok=True)
print(folder_name, 'folder created.')
# Set path for scoring script
script_file = os.path.join(experiment_folder,"score_diabetes.py")
###Output
_____no_output_____
###Markdown
The web service where we deploy the model will need some Python code to load the input data, get the model from the workspace, and generate and return predictions. We'll save this code in an *entry script* (often called a *scoring script*) that will be deployed to the web service:
###Code
%%writefile $script_file
import json
import joblib
import numpy as np
from azureml.core.model import Model
# Called when the service is loaded
def init():
global model
# Get the path to the deployed model file and load it
model_path = Model.get_model_path('diabetes_model')
model = joblib.load(model_path)
# Called when a request is received
def run(raw_data):
# Get the input data as a numpy array
data = np.array(json.loads(raw_data)['data'])
# Get a prediction from the model
predictions = model.predict(data)
# Get the corresponding classname for each prediction (0 or 1)
classnames = ['not-diabetic', 'diabetic']
predicted_classes = []
for prediction in predictions:
predicted_classes.append(classnames[prediction])
# Return the predictions as JSON
return json.dumps(predicted_classes)
###Output
_____no_output_____
###Markdown
The web service will be hosted in a container, and the container will need to install any required Python dependencies when it gets initialized. In this case, our scoring code requires **scikit-learn**, so we'll create a .yml file that tells the container host to install this into the environment.
###Code
from azureml.core.conda_dependencies import CondaDependencies
# Add the dependencies for our model (AzureML defaults is already included)
myenv = CondaDependencies()
myenv.add_conda_package('scikit-learn')
# Save the environment config as a .yml file
env_file = os.path.join(experiment_folder,"diabetes_env.yml")
with open(env_file,"w") as f:
f.write(myenv.serialize_to_string())
print("Saved dependency info in", env_file)
# Print the .yml file
with open(env_file,"r") as f:
print(f.read())
###Output
_____no_output_____
###Markdown
Now you're ready to deploy. We'll deploy the container as a service named **diabetes-service**. The deployment process includes the following steps:
1. Define an inference configuration, which includes the scoring and environment files required to load and use the model.
2. Define a deployment configuration that defines the execution environment in which the service will be hosted. In this case, an Azure Container Instance.
3. Deploy the model as a web service.
4. Verify the status of the deployed service.
> **More Information**: For more details about model deployment, and options for target execution environments, see the [documentation](https://docs.microsoft.com/azure/machine-learning/how-to-deploy-and-where).
Deployment will take some time as it first runs a process to create a container image, and then runs a process to create a web service based on the image. When deployment has completed successfully, you'll see a status of **Healthy**.
###Code
from azureml.core.webservice import AciWebservice
from azureml.core.model import InferenceConfig
# Configure the scoring environment
inference_config = InferenceConfig(runtime= "python",
entry_script=script_file,
conda_file=env_file)
deployment_config = AciWebservice.deploy_configuration(cpu_cores = 1, memory_gb = 1)
service_name = "diabetes-service"
service = Model.deploy(ws, service_name, [model], inference_config, deployment_config)
service.wait_for_deployment(True)
print(service.state)
###Output
_____no_output_____
###Markdown
Hopefully, the deployment has been successful and you can see a status of **Healthy**. If not, you can use the following code to get the service logs to help you troubleshoot.
###Code
print(service.get_logs())
# If you need to make a change and redeploy, you may need to delete unhealthy service using the following code:
#service.delete()
###Output
_____no_output_____
###Markdown
Take a look at your workspace in [Azure Machine Learning Studio](https://ml.azure.com) and view the **Endpoints** page, which shows the deployed services in your workspace.
You can also retrieve the names of web services in your workspace by running the following code:
###Code
for webservice_name in ws.webservices:
print(webservice_name)
###Output
_____no_output_____
###Markdown
Use the web service
With the service deployed, now you can consume it from a client application.
###Code
import json
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22]]
print ('Patient: {}'.format(x_new[0]))
# Convert the array to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Call the web service, passing the input data (the web service will also accept the data in binary format)
predictions = service.run(input_data = input_json)
# Get the predicted class - it'll be the first (and only) one.
predicted_classes = json.loads(predictions)
print(predicted_classes[0])
###Output
_____no_output_____
###Markdown
You can also send multiple patient observations to the service, and get back a prediction for each one.
###Code
import json
# This time our input is an array of two feature arrays
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22],
[0,148,58,11,179,39.19207553,0.160829008,45]]
# Convert the array or arrays to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Call the web service, passing the input data
predictions = service.run(input_data = input_json)
# Get the predicted classes.
predicted_classes = json.loads(predictions)
for i in range(len(x_new)):
print ("Patient {}".format(x_new[i]), predicted_classes[i] )
###Output
_____no_output_____
###Markdown
The code above uses the Azure Machine Learning SDK to connect to the containerized web service and use it to generate predictions from your diabetes classification model. In production, a model is likely to be consumed by business applications that do not use the Azure Machine Learning SDK, but simply make HTTP requests to the web service.
Let's determine the URL to which these applications must submit their requests:
###Code
endpoint = service.scoring_uri
print(endpoint)
###Output
_____no_output_____
###Markdown
Now that you know the endpoint URI, an application can simply make an HTTP request, sending the patient data in JSON format, and receive back the predicted class(es).
###Code
import requests
import json
x_new = [[2,180,74,24,21,23.9091702,1.488172308,22],
[0,148,58,11,179,39.19207553,0.160829008,45]]
# Convert the array to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Set the content type
headers = { 'Content-Type':'application/json' }
predictions = requests.post(endpoint, input_json, headers = headers)
predicted_classes = json.loads(predictions.json())
for i in range(len(x_new)):
print ("Patient {}".format(x_new[i]), predicted_classes[i] )
###Output
_____no_output_____
###Markdown
You've deployed your web service as an Azure Container Instance (ACI) service that requires no authentication. This is fine for development and testing, but for production you should consider deploying to an Azure Kubernetes Service (AKS) cluster and enabling token-based authentication. This would require REST requests to include an **Authorization** header.
Delete the service
When you no longer need your service, you should delete it to avoid incurring unnecessary charges.
###Code
service.delete()
print ('Service deleted.')
###Output
_____no_output_____ |
IBM Professional Certificates/Data Analyst Capstone Project/1_Data_Collection/1-5_ Explore the Data Set.ipynb | ###Markdown
**Survey Dataset Exploration Lab**
Estimated time needed: **30** minutes
Objectives
After completing this lab you will be able to:
- Load the dataset that will be used throughout the capstone project.
- Explore the dataset.
- Get familiar with the data types.
Load the dataset
Import the required libraries.
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
The dataset is available on the IBM Cloud at the URL below.
###Code
dataset_url = "https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBM-DA0321EN-SkillsNetwork/LargeData/m1_survey_data.csv"
###Output
_____no_output_____
###Markdown
Load the data available at dataset_url into a dataframe.
###Code
df = pd.read_csv(dataset_url)
###Output
_____no_output_____
###Markdown
Explore the data set It is a good idea to print the top 5 rows of the dataset to get a feel of how the dataset will look. Display the top 5 rows and columns from your dataset.
###Code
df.head()
###Output
_____no_output_____
###Markdown
Find out the number of rows and columns
Start by exploring the number of rows and columns of data in the dataset. Print the number of rows in the dataset.
###Code
df.shape[0]
###Output
_____no_output_____
###Markdown
Print the number of columns in the dataset.
###Code
df.shape[1]
###Output
_____no_output_____
###Markdown
Identify the data types of each column
Explore the dataset and identify the data types of each column. Print the datatype of all columns.
###Code
df.dtypes
###Output
_____no_output_____
###Markdown
Print the mean age of the survey participants.
###Code
df["Age"].mean()
###Output
_____no_output_____
###Markdown
The dataset is the result of a worldwide survey. Print how many unique countries there are in the Country column.
###Code
df["Country"].nunique()
###Output
_____no_output_____ |
gene-prediction-bin-bucket.ipynb | ###Markdown
- How many items are NaN in the `is_hk` column?
- How many items are known housekeeping genes?
- How many items are known tissue-specific genes?
###Code
print("NaN %s" % len(data[data["is_hk"].isnull()]))
print("Housekeeping %s" % len(data[data["is_hk"] == 1]))
print("Specific %s" % len(data[data["is_hk"] == 0]))
def split_train_test(data):
# Hold out the last 10% of rows for testing
split = int(len(data) * 0.9)
return data[0:split], data[split:]
def split_data(data):
# Shuffle data
data = data.sample(frac = 1)
del data["EMBL_transcript_id"]
# train_set, test_set
hk_yes = data[data["is_hk"] == IS_HK]
hk_no = data[data["is_hk"] == IS_NOT_HK]
train_yes, test_yes = split_train_test(hk_yes)
train_no , test_no = split_train_test(hk_no)
train_set = train_yes
train_set = train_set.append(train_no)
train_set = train_set.sample(frac = 1)
test_set = test_yes
test_set = test_set.append(test_no)
test_set = test_set.sample(frac = 1)
# unsup_train_set
unsup_train_set = data[data["is_hk"].isnull()]
# sup_train_set
sup_train_set = data[data["is_hk"].notnull()]
return train_set, test_set, unsup_train_set, sup_train_set
train_set, test_set, unsup_train_set, sup_train_set = split_data(data)
def bin_plot(hist, bin_edge):
# make sure to import matplotlib.pyplot as plt
# plot the histogram
plt.figure(figsize=(6,4))
plt.fill_between(bin_edge.repeat(2)[1:-1],hist.repeat(2),facecolor="steelblue")
plt.show()
# plot the first 100 bins only
plt.figure(figsize=(6,4))
plt.fill_between(bin_edge.repeat(2)[1:100],hist.repeat(2)[1:100],facecolor="steelblue")
plt.show()
# plot the first 500 bins only
plt.figure(figsize=(6,4))
plt.fill_between(bin_edge.repeat(2)[1:500],hist.repeat(2)[1:500],facecolor="steelblue")
plt.show()
# remove NaN values
train_set_clength_no_nan = data["cDNA_length"][~np.isnan(data["cDNA_length"])]
# bin the data into 1000 equally spaced bins
# hist is the count for each bin
# bin_edge is the edge values of the bins
hist, bin_edge = np.histogram(train_set_clength_no_nan,1000)
bin_plot(hist, bin_edge)
###Output
_____no_output_____
###Markdown
How many bins have zero counts?
###Code
print("Total %s" % len(hist))
print("Zeros %s" % sum(hist == 0))
###Output
Total 1000
Zeros 823
###Markdown
**cDNA Density Plot**
###Code
train_set_clength_no_nan_sorted = data["cDNA_length"][data["cDNA_length"].notnull()].sort_values()
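# Equal-frequency bin edges: take every 70th value of the sorted lengths
# (deduplicated with np.unique), so each bin holds roughly 70 observations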
bin_edge = np.unique(train_set_clength_no_nan_sorted[0::70])
hist = np.bincount(np.digitize(train_set_clength_no_nan_sorted, bin_edge))
hist = hist[1:-1]
bin_plot(hist, bin_edge)
###Output
_____no_output_____
###Markdown
**CDS Density Plot**
###Code
train_set_clength_no_nan_sorted = data["cds_length"][data["cds_length"].notnull()].sort_values().values
bin_edge = np.unique(train_set_clength_no_nan_sorted[0::100])
hist = np.bincount(np.digitize(train_set_clength_no_nan_sorted, bin_edge))
hist = hist[1:-1]
bin_plot(hist, bin_edge)
###Output
_____no_output_____
###Markdown
Plot Raw Data
###Code
for feature in list(train_set):
if feature == "is_hk":
continue
f, (ax1, ax2) = plt.subplots(1, 2, sharey=True, figsize=(12,4))
bin_size = 2 if feature in category_features else 500
X = train_set[train_set["is_hk"] == IS_HK][feature][~np.isnan(train_set[feature])]
hist, bin_edge = np.histogram(X, bin_size)
ax1.fill_between(bin_edge.repeat(2)[1:-1],hist.repeat(2),facecolor="orange")
ax1.set_title(feature + " (is_hk)")
X = train_set[train_set["is_hk"] == IS_NOT_HK][feature][~np.isnan(train_set[feature])]
hist, bin_edge = np.histogram(X, bin_size)
ax2.fill_between(bin_edge.repeat(2)[1:-1],hist.repeat(2),facecolor="steelblue")
ax2.set_title(feature + " (is_not_hk)")
plt.show()
###Output
_____no_output_____
###Markdown
MLE Distribution
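For each feature, the class-conditional distributions are estimated by maximum likelihood: equal-frequency bin edges are computed from every 10th value of the sorted feature, the labelled examples of each class are counted per bin with `np.digitize`, and the counts are normalized by the class sizes to give $\hat{p}(x_f \mid \text{hk})$ and $\hat{p}(x_f \mid \text{not hk})$. The trailing zero-padding keeps both histograms aligned with the bin-edge array, so values outside the observed range get probability zero.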
###Code
mle_dist = {}
def bin_data_likelihood(train, sup_train, feature):
data = train[feature][train[feature].notnull()].sort_values()
bin_edge = np.unique(data[0::10])
sup_train = sup_train[sup_train[feature].notnull()]
sup_train_is_hk = sup_train[feature][sup_train["is_hk"] == 1]
sup_train_is_not_hk = sup_train[feature][sup_train["is_hk"] == 0]
hist_is_hk = np.bincount(np.digitize(sup_train_is_hk, bin_edge))
hist_is_not_hk = np.bincount(np.digitize(sup_train_is_not_hk, bin_edge))
hist_is_hk = hist_is_hk / len(sup_train_is_hk)
hist_is_not_hk = hist_is_not_hk / len(sup_train_is_not_hk)
hist_is_hk = np.append(hist_is_hk, np.zeros(len(bin_edge) + 1 - len(hist_is_hk)))
hist_is_not_hk = np.append(hist_is_not_hk, np.zeros(len(bin_edge) + 1 - len(hist_is_not_hk)))
return bin_edge, hist_is_hk, hist_is_not_hk
for feature in list(train_set):
if feature == "is_hk":
continue
bin_edge, hist_hk, hist_not_hk = bin_data_likelihood(train_set, sup_train_set, feature)
data = {
"bin_edge": bin_edge,
"is_hk": hist_hk,
"is_not_hk": hist_not_hk
}
mle_dist[feature] = data
plt.figure(figsize=(7,5))
plt.bar(np.arange(1, len(bin_edge) + 1), hist_hk[1:], alpha=0.85)
plt.bar(np.arange(1, len(bin_edge) + 1), hist_not_hk[1:], alpha=0.85, color="orange")
plt.legend(["is_hk", "is_not_hk"])
plt.title(feature)
plt.show()
###Output
_____no_output_____
###Markdown
Find Prior
###Code
prior = [0, 0]
prior[IS_HK] = len(sup_train_set[sup_train_set["is_hk"] == IS_HK]) / len(sup_train_set)
prior[IS_NOT_HK] = len(sup_train_set[sup_train_set["is_hk"] == IS_NOT_HK]) / len(sup_train_set)
print("Prior is_hk = %f, is_not_hk = %f" % (prior[IS_HK], prior[IS_NOT_HK]))
###Output
Prior is_hk = 0.133766, is_not_hk = 0.866234
###Markdown
MLE Prediction
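The `predict` function below turns these per-feature estimates into a naive Bayes log-odds score, treating features as independent given the class. With a small smoothing constant $\epsilon = 0.01$ guarding against empty bins, the score for a gene $x$ is

$$L(x) = \log\frac{P(\text{hk})}{P(\text{not hk})} + \sum_{f}\Big[\log\big(\hat{p}(x_f \mid \text{hk}) + \epsilon\big) - \log\big(\hat{p}(x_f \mid \text{not hk}) + \epsilon\big)\Big],$$

where the sum runs over the non-missing features only. A gene is predicted to be housekeeping when $L(x)$ exceeds a threshold (0 by default in `activate_predict`).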
###Code
def prob_category(x, ll, is_hk):
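# Note: unused helper for binary/categorical features; it expects per-class
# "prob_zero" entries that the final mle_dist tables never define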
if x == 0:
return ll[is_hk]["prob_zero"]
else:
return 1 - ll[is_hk]["prob_zero"]
def predict(test_data):
global mle_dist
L = np.zeros(len(test_data))
for feature in list(test_data):
if feature in ["is_hk", "EMBL_transcript_id"]:
continue
data = test_data[feature]
not_null_idx = data.notnull()
p_house = mle_dist[feature]["is_hk"][np.digitize(data, mle_dist[feature]["bin_edge"])]
p_not_house = mle_dist[feature]["is_not_hk"][np.digitize(data, mle_dist[feature]["bin_edge"])]
L[not_null_idx] += np.log(p_house[not_null_idx] + 0.01)
L[not_null_idx] -= np.log(p_not_house[not_null_idx] + 0.01)
L += np.log(prior[IS_HK]) - np.log(prior[IS_NOT_HK])
return L
def activate_predict(y, threshold = 0.0):
return (y > threshold).astype(int)
def accuracy(y_test, y_pred):
return np.sum(y_test == y_pred) / len(y_test)
def precision(y_test, y_pred):
n_y_pred = np.sum(y_pred == 1)
return np.sum(np.logical_and(y_test == y_pred, y_pred == 1)) / (n_y_pred + 1e-12)
# true positive rate
def recall(y_test, y_pred):
return np.sum(np.logical_and(y_test == y_pred, y_test == 1)) / (np.sum(y_test == 1) + 1e-12)
def false_positive_rate(y_test, y_pred):
return np.sum(np.logical_and(y_test != y_pred, y_test == 0)) / np.sum(y_test == 0)
def measure_metrics(y_test, y_pred):
print("Accuracy: %f" % accuracy(y_test, y_pred))
pcs = precision(y_test, y_pred)
rc = recall(y_test, y_pred)
print("Precision: %f" % pcs)
print("Recall: %f" % rc)
f1 = 2 * pcs * rc / (pcs + rc + 1e-12)
print("F1: %f" % f1)
y_test = test_set["is_hk"]
y_pred = activate_predict(predict(test_set))
measure_metrics(y_test, y_pred)
###Output
Accuracy: 0.974359
Precision: 0.846154
Recall: 1.000000
F1: 0.916667
###Markdown
Baseline
1\. Random Choice Baseline
###Code
def create_random_pred():
return np.random.random_sample((len(y_test),)) - 0.5
y_pred = activate_predict(create_random_pred())
measure_metrics(y_test, y_pred)
###Output
Accuracy: 0.435897
Precision: 0.076923
Recall: 0.272727
F1: 0.120000
###Markdown
2\. Majority
###Code
def create_majority_pred():
return np.ones(len(y_test)) * test_set["is_hk"].mode().values.astype(int)
y_pred = create_majority_pred()
measure_metrics(y_test, y_pred)
t = np.arange(-10,10,0.05)
t_best_acc = 0
t_best_f1 = 0
best_acc = -999
best_f1 = -999
for t_i in t:
y_pred = activate_predict(predict(test_set), threshold = t_i)
pcs = precision(y_test, y_pred)
rc = recall(y_test, y_pred)
f1 = 2 * pcs * rc / (pcs + rc + 1e-12)
if f1 > best_f1:
best_f1 = f1
t_best_f1 = t_i
acc = accuracy(y_test, y_pred)
if acc > best_acc:
best_acc = acc
t_best_acc = t_i
print("Best accuracy %f at threshold %f" % (best_acc, t_best_acc))
print("Best f1 %f at threshold %f" % (best_f1, t_best_f1))
###Output
Best accuracy 1.000000 at threshold 1.950000
Best f1 1.000000 at threshold 1.950000
###Markdown
RoC
###Code
t = np.arange(-10,10,0.1)
tp = []
tp_random = []
tp_majority = []
fp = []
fp_random = []
fp_majority = []
y_test = test_set["is_hk"]
y_pred = predict(test_set)
y_random = create_random_pred()
y_act_majority = create_majority_pred()
for t_i in t:
y_act_pred = activate_predict(y_pred, threshold = t_i)
y_act_random = activate_predict(y_random, threshold = t_i)
tp.append(recall(y_test, y_act_pred))
fp.append(false_positive_rate(y_test, y_act_pred))
tp_random.append(recall(y_test, y_act_random))
fp_random.append(false_positive_rate(y_test, y_act_random))
tp_majority.append(recall(y_test, y_act_majority))
fp_majority.append(false_positive_rate(y_test, y_act_majority))
plt.figure(figsize=(7,5))
plt.plot(fp_random, tp_random)
plt.plot(fp_majority, tp_majority)
plt.plot(fp, tp)
plt.legend(['Random', 'Majority', 'Naive Bayes'])
plt.show()
###Output
_____no_output_____
###Markdown
Solve Unsupervised Dataset
###Code
data["is_hk"] = activate_predict(predict(data))
data[["EMBL_transcript_id", "is_hk"]].to_csv("predict.csv", index=False)
###Output
_____no_output_____ |
02_Filtering_&_Sorting/Euro12/Exercises_with_Solutions.ipynb | ###Markdown
Ex2 - Filtering and Sorting Data
Check out [Euro 12 Exercises Video Tutorial](https://youtu.be/iqk5d48Qisg) to watch a data scientist go through the exercises
This time we are going to pull data directly from the internet.
Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/02_Filtering_%26_Sorting/Euro12/Euro_2012_stats_TEAM.csv). Step 3. Assign it to a variable called euro12.
###Code
euro12 = pd.read_csv('https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/02_Filtering_%26_Sorting/Euro12/Euro_2012_stats_TEAM.csv', sep=',')
euro12
###Output
_____no_output_____
###Markdown
Step 4. Select only the Goals column.
###Code
euro12.Goals
###Output
_____no_output_____
###Markdown
Step 5. How many teams participated in Euro2012?
###Code
euro12.shape[0]
###Output
_____no_output_____
###Markdown
Step 6. What is the number of columns in the dataset?
###Code
euro12.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 16 entries, 0 to 15
Data columns (total 35 columns):
Team 16 non-null object
Goals 16 non-null int64
Shots on target 16 non-null int64
Shots off target 16 non-null int64
Shooting Accuracy 16 non-null object
% Goals-to-shots 16 non-null object
Total shots (inc. Blocked) 16 non-null int64
Hit Woodwork 16 non-null int64
Penalty goals 16 non-null int64
Penalties not scored 16 non-null int64
Headed goals 16 non-null int64
Passes 16 non-null int64
Passes completed 16 non-null int64
Passing Accuracy 16 non-null object
Touches 16 non-null int64
Crosses 16 non-null int64
Dribbles 16 non-null int64
Corners Taken 16 non-null int64
Tackles 16 non-null int64
Clearances 16 non-null int64
Interceptions 16 non-null int64
Clearances off line 15 non-null float64
Clean Sheets 16 non-null int64
Blocks 16 non-null int64
Goals conceded 16 non-null int64
Saves made 16 non-null int64
Saves-to-shots ratio 16 non-null object
Fouls Won 16 non-null int64
Fouls Conceded 16 non-null int64
Offsides 16 non-null int64
Yellow Cards 16 non-null int64
Red Cards 16 non-null int64
Subs on 16 non-null int64
Subs off 16 non-null int64
Players Used 16 non-null int64
dtypes: float64(1), int64(29), object(5)
memory usage: 4.4+ KB
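###Markdown
`euro12.info()` shows the column count among other details; `euro12.shape[1]` returns just the number of columns (35 here).
###Code
euro12.shape[1]
###Output
_____no_output_____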
###Markdown
Step 7. View only the columns Team, Yellow Cards and Red Cards and assign them to a dataframe called discipline
###Code
# filter only giving the column names
discipline = euro12[['Team', 'Yellow Cards', 'Red Cards']]
discipline
###Output
_____no_output_____
###Markdown
Step 8. Sort the teams by Red Cards, then by Yellow Cards
###Code
discipline.sort_values(['Red Cards', 'Yellow Cards'], ascending = False)
###Output
_____no_output_____
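###Markdown
`ascending` also accepts one flag per column, which is useful when the sort keys should run in different directions, as in the sketch below.
###Code
# per-column order: red cards descending, yellow cards ascending
discipline.sort_values(['Red Cards', 'Yellow Cards'], ascending=[False, True])
###Output
_____no_output_____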
###Markdown
Step 9. Calculate the mean Yellow Cards given per Team
###Code
round(discipline['Yellow Cards'].mean())
###Output
_____no_output_____
###Markdown
Step 10. Filter teams that scored more than 6 goals
###Code
euro12[euro12.Goals > 6]
###Output
_____no_output_____
###Markdown
Step 11. Select the teams that start with G
###Code
euro12[euro12.Team.str.startswith('G')]
###Output
_____no_output_____
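###Markdown
`str.startswith` is case-sensitive; lower-casing the column first makes the match case-insensitive (a small variant of the step above).
###Code
euro12[euro12.Team.str.lower().str.startswith('g')]
###Output
_____no_output_____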
###Markdown
Step 12. Select the first 7 columns
###Code
# use .iloc to slice via the positions of the passed integers
# : means all rows; 0:7 selects columns 0 through 6 (the stop index is exclusive)
euro12.iloc[: , 0:7]
###Output
_____no_output_____
###Markdown
Step 13. Select all columns except the last 3.
###Code
# use a negative stop index to exclude the last 3 columns
euro12.iloc[: , :-3]
###Output
_____no_output_____
###Markdown
Step 14. Present only the Shooting Accuracy from England, Italy and Russia
###Code
# .loc is another way to slice, using row and column labels instead of positions
euro12.loc[euro12.Team.isin(['England', 'Italy', 'Russia']), ['Team','Shooting Accuracy']]
###Output
_____no_output_____
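###Markdown
The same selection can be written with `DataFrame.query`; note the column projection still uses bracket labels because `Shooting Accuracy` contains a space (a sketch, equivalent to the `.loc` version above).
###Code
euro12.query("Team in ['England', 'Italy', 'Russia']")[['Team', 'Shooting Accuracy']]
###Output
_____no_output_____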
###Code
euro12 = pd.read_csv('https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/02_Filtering_%26_Sorting/Euro12/Euro_2012_stats_TEAM.csv', sep=',')
euro12
###Output
_____no_output_____
###Markdown
Step 4. Select only the Goal column.
###Code
euro12.Goals
###Output
_____no_output_____
###Markdown
Step 5. How many team participated in the Euro2012?
###Code
euro12.shape[0]
###Output
_____no_output_____
###Markdown
Step 6. What is the number of columns in the dataset?
###Code
euro12.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 16 entries, 0 to 15
Data columns (total 35 columns):
Team 16 non-null object
Goals 16 non-null int64
Shots on target 16 non-null int64
Shots off target 16 non-null int64
Shooting Accuracy 16 non-null object
% Goals-to-shots 16 non-null object
Total shots (inc. Blocked) 16 non-null int64
Hit Woodwork 16 non-null int64
Penalty goals 16 non-null int64
Penalties not scored 16 non-null int64
Headed goals 16 non-null int64
Passes 16 non-null int64
Passes completed 16 non-null int64
Passing Accuracy 16 non-null object
Touches 16 non-null int64
Crosses 16 non-null int64
Dribbles 16 non-null int64
Corners Taken 16 non-null int64
Tackles 16 non-null int64
Clearances 16 non-null int64
Interceptions 16 non-null int64
Clearances off line 15 non-null float64
Clean Sheets 16 non-null int64
Blocks 16 non-null int64
Goals conceded 16 non-null int64
Saves made 16 non-null int64
Saves-to-shots ratio 16 non-null object
Fouls Won 16 non-null int64
Fouls Conceded 16 non-null int64
Offsides 16 non-null int64
Yellow Cards 16 non-null int64
Red Cards 16 non-null int64
Subs on 16 non-null int64
Subs off 16 non-null int64
Players Used 16 non-null int64
dtypes: float64(1), int64(29), object(5)
memory usage: 4.5+ KB
###Markdown
Step 7. View only the columns Team, Yellow Cards and Red Cards and assign them to a dataframe called discipline
###Code
# filter only giving the column names
discipline = euro12[['Team', 'Yellow Cards', 'Red Cards']]
discipline
###Output
_____no_output_____
###Markdown
Step 8. Sort the teams by Red Cards, then to Yellow Cards
###Code
discipline.sort_values(['Red Cards', 'Yellow Cards'], ascending = False)
###Output
_____no_output_____
###Markdown
Step 9. Calculate the mean Yellow Cards given per Team
###Code
round(discipline['Yellow Cards'].mean())
###Output
_____no_output_____
###Markdown
Step 10. Filter teams that scored more than 6 goals
###Code
euro12[euro12.Goals > 6]
###Output
_____no_output_____
###Markdown
Step 11. Select the teams that start with G
###Code
euro12[euro12.Team.str.startswith('G')]
###Output
_____no_output_____
###Markdown
Step 12. Select the first 7 columns
###Code
# use .iloc to slices via the position of the passed integers
# : means all, 0:7 means from 0 to 7
euro12.iloc[: , 0:7]
###Output
_____no_output_____
###Markdown
Step 13. Select all columns except the last 3.
###Code
# use negative to exclude the last 3 columns
euro12.iloc[: , :-3]
###Output
_____no_output_____
###Markdown
Step 14. Present only the Shooting Accuracy from England, Italy and Russia
###Code
# .loc is another way to slice, using the labels of the columns and indexes
euro12.loc[euro12.Team.isin(['England', 'Italy', 'Russia']), ['Team','Shooting Accuracy']]
###Output
_____no_output_____
###Markdown
Ex2 - Filtering and Sorting Data This time we are going to pull data directly from the internet. Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/02_Filtering_%26_Sorting/Euro12/Euro_2012_stats_TEAM.csv). Step 3. Assign it to a variable called euro12.
###Code
euro12 = pd.read_csv('https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/02_Filtering_%26_Sorting/Euro12/Euro_2012_stats_TEAM.csv', sep=',')
euro12
###Output
_____no_output_____
###Markdown
Step 4. Select only the Goal column.
###Code
euro12.Goals
###Output
_____no_output_____
###Markdown
Step 5. How many team participated in the Euro2012?
###Code
euro12.shape[0]
###Output
_____no_output_____
###Markdown
Step 6. What is the number of columns in the dataset?
###Code
euro12.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 16 entries, 0 to 15
Data columns (total 35 columns):
Team 16 non-null object
Goals 16 non-null int64
Shots on target 16 non-null int64
Shots off target 16 non-null int64
Shooting Accuracy 16 non-null object
% Goals-to-shots 16 non-null object
Total shots (inc. Blocked) 16 non-null int64
Hit Woodwork 16 non-null int64
Penalty goals 16 non-null int64
Penalties not scored 16 non-null int64
Headed goals 16 non-null int64
Passes 16 non-null int64
Passes completed 16 non-null int64
Passing Accuracy 16 non-null object
Touches 16 non-null int64
Crosses 16 non-null int64
Dribbles 16 non-null int64
Corners Taken 16 non-null int64
Tackles 16 non-null int64
Clearances 16 non-null int64
Interceptions 16 non-null int64
Clearances off line 15 non-null float64
Clean Sheets 16 non-null int64
Blocks 16 non-null int64
Goals conceded 16 non-null int64
Saves made 16 non-null int64
Saves-to-shots ratio 16 non-null object
Fouls Won 16 non-null int64
Fouls Conceded 16 non-null int64
Offsides 16 non-null int64
Yellow Cards 16 non-null int64
Red Cards 16 non-null int64
Subs on 16 non-null int64
Subs off 16 non-null int64
Players Used 16 non-null int64
dtypes: float64(1), int64(29), object(5)
memory usage: 4.4+ KB
###Markdown
Step 7. View only the columns Team, Yellow Cards and Red Cards and assign them to a dataframe called discipline
###Code
# filter only giving the column names
discipline = euro12[['Team', 'Yellow Cards', 'Red Cards']]
discipline
###Output
_____no_output_____
###Markdown
Step 8. Sort the teams by Red Cards, then to Yellow Cards
###Code
discipline.sort_values(['Red Cards', 'Yellow Cards'], ascending = False)
###Output
_____no_output_____
###Markdown
Step 9. Calculate the mean Yellow Cards given per Team
###Code
round(discipline['Yellow Cards'].mean())
###Output
_____no_output_____
###Markdown
Step 10. Filter teams that scored more than 6 goals
###Code
euro12[euro12.Goals > 6]
###Output
_____no_output_____
###Markdown
Step 11. Select the teams that start with G
###Code
euro12[euro12.Team.str.startswith('G')]
###Output
_____no_output_____
###Markdown
Step 12. Select the first 7 columns
###Code
# use .iloc to slices via the position of the passed integers
# : means all, 0:7 means from 0 to 7
euro12.iloc[: , 0:7]
###Output
_____no_output_____
###Markdown
Step 13. Select all columns except the last 3.
###Code
# use negative to exclude the last 3 columns
euro12.iloc[: , :-3]
###Output
_____no_output_____
###Markdown
Step 14. Present only the Shooting Accuracy from England, Italy and Russia
###Code
# .loc is another way to slice, using the labels of the columns and indexes
euro12.loc[euro12.Team.isin(['England', 'Italy', 'Russia']), ['Team','Shooting Accuracy']]
###Output
_____no_output_____
###Markdown
Ex2 - Filtering and Sorting DataCheck out [Euro 12 Exercises Video Tutorial](https://youtu.be/iqk5d48Qisg) to watch a data scientist go through the exercises This time we are going to pull data directly from the internet. Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/02_Filtering_%26_Sorting/Euro12/Euro_2012_stats_TEAM.csv). Step 3. Assign it to a variable called euro12.
###Code
euro12 = pd.read_csv('https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/02_Filtering_%26_Sorting/Euro12/Euro_2012_stats_TEAM.csv', sep=',')
euro12
###Output
_____no_output_____
###Markdown
Step 4. Select only the Goal column.
###Code
euro12.Goals
###Output
_____no_output_____
###Markdown
Step 5. How many team participated in the Euro2012?
###Code
euro12.shape[0]
###Output
_____no_output_____
###Markdown
Step 6. What is the number of columns in the dataset?
###Code
euro12.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 16 entries, 0 to 15
Data columns (total 35 columns):
Team 16 non-null object
Goals 16 non-null int64
Shots on target 16 non-null int64
Shots off target 16 non-null int64
Shooting Accuracy 16 non-null object
% Goals-to-shots 16 non-null object
Total shots (inc. Blocked) 16 non-null int64
Hit Woodwork 16 non-null int64
Penalty goals 16 non-null int64
Penalties not scored 16 non-null int64
Headed goals 16 non-null int64
Passes 16 non-null int64
Passes completed 16 non-null int64
Passing Accuracy 16 non-null object
Touches 16 non-null int64
Crosses 16 non-null int64
Dribbles 16 non-null int64
Corners Taken 16 non-null int64
Tackles 16 non-null int64
Clearances 16 non-null int64
Interceptions 16 non-null int64
Clearances off line 15 non-null float64
Clean Sheets 16 non-null int64
Blocks 16 non-null int64
Goals conceded 16 non-null int64
Saves made 16 non-null int64
Saves-to-shots ratio 16 non-null object
Fouls Won 16 non-null int64
Fouls Conceded 16 non-null int64
Offsides 16 non-null int64
Yellow Cards 16 non-null int64
Red Cards 16 non-null int64
Subs on 16 non-null int64
Subs off 16 non-null int64
Players Used 16 non-null int64
dtypes: float64(1), int64(29), object(5)
memory usage: 4.4+ KB
###Markdown
Step 7. View only the columns Team, Yellow Cards and Red Cards and assign them to a dataframe called discipline
###Code
# filter only giving the column names
discipline = euro12[['Team', 'Yellow Cards', 'Red Cards']]
discipline
###Output
_____no_output_____
###Markdown
Step 8. Sort the teams by Red Cards, then to Yellow Cards
###Code
discipline.sort_values(['Red Cards', 'Yellow Cards'], ascending = False)
###Output
_____no_output_____
###Markdown
Step 9. Calculate the mean Yellow Cards given per Team
###Code
round(discipline['Yellow Cards'].mean())
###Output
_____no_output_____
###Markdown
Step 10. Filter teams that scored more than 6 goals
###Code
euro12[euro12.Goals > 6]
###Output
_____no_output_____
###Markdown
Step 11. Select the teams that start with G
###Code
euro12[euro12.Team.str.startswith('G')]
###Output
_____no_output_____
###Markdown
Step 12. Select the first 7 columns
###Code
# use .iloc to slices via the position of the passed integers
# : means all, 0:7 means from 0 to 7
euro12.iloc[: , 0:7]
###Output
_____no_output_____
###Markdown
Step 13. Select all columns except the last 3.
###Code
# use negative to exclude the last 3 columns
euro12.iloc[: , :-3]
###Output
_____no_output_____
###Markdown
Step 14. Present only the Shooting Accuracy from England, Italy and Russia
###Code
# .loc is another way to slice, using the labels of the columns and indexes
euro12.loc[euro12.Team.isin(['England', 'Italy', 'Russia']), ['Team','Shooting Accuracy']]
###Output
_____no_output_____
###Markdown
Ex2 - Filtering and Sorting Data This time we are going to pull data directly from the internet. Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/02_Filtering_%26_Sorting/Euro12/Euro_2012_stats_TEAM.csv). Step 3. Assign it to a variable called euro12.
###Code
euro12 = pd.read_csv('https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/02_Filtering_%26_Sorting/Euro12/Euro_2012_stats_TEAM.csv', sep=',')
euro12
###Output
_____no_output_____
###Markdown
Step 4. Select only the Goal column.
###Code
euro12.Goals
###Output
_____no_output_____
###Markdown
Step 5. How many team participated in the Euro2012?
###Code
euro12.shape[0]
###Output
_____no_output_____
###Markdown
Step 6. What is the number of columns in the dataset?
###Code
euro12.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 16 entries, 0 to 15
Data columns (total 35 columns):
Team 16 non-null object
Goals 16 non-null int64
Shots on target 16 non-null int64
Shots off target 16 non-null int64
Shooting Accuracy 16 non-null object
% Goals-to-shots 16 non-null object
Total shots (inc. Blocked) 16 non-null int64
Hit Woodwork 16 non-null int64
Penalty goals 16 non-null int64
Penalties not scored 16 non-null int64
Headed goals 16 non-null int64
Passes 16 non-null int64
Passes completed 16 non-null int64
Passing Accuracy 16 non-null object
Touches 16 non-null int64
Crosses 16 non-null int64
Dribbles 16 non-null int64
Corners Taken 16 non-null int64
Tackles 16 non-null int64
Clearances 16 non-null int64
Interceptions 16 non-null int64
Clearances off line 15 non-null float64
Clean Sheets 16 non-null int64
Blocks 16 non-null int64
Goals conceded 16 non-null int64
Saves made 16 non-null int64
Saves-to-shots ratio 16 non-null object
Fouls Won 16 non-null int64
Fouls Conceded 16 non-null int64
Offsides 16 non-null int64
Yellow Cards 16 non-null int64
Red Cards 16 non-null int64
Subs on 16 non-null int64
Subs off 16 non-null int64
Players Used 16 non-null int64
dtypes: float64(1), int64(29), object(5)
memory usage: 4.4+ KB
###Markdown
Step 7. View only the columns Team, Yellow Cards and Red Cards and assign them to a dataframe called discipline
###Code
# filter only giving the column names
discipline = euro12[['Team', 'Yellow Cards', 'Red Cards']]
discipline
###Output
_____no_output_____
###Markdown
Step 8. Sort the teams by Red Cards, then to Yellow Cards
###Code
discipline.sort_values(['Red Cards', 'Yellow Cards'], ascending = False)
###Output
_____no_output_____
###Markdown
Step 9. Calculate the mean Yellow Cards given per Team
###Code
round(discipline['Yellow Cards'].mean())
###Output
_____no_output_____
###Markdown
Step 10. Filter teams that scored more than 6 goals
###Code
euro12[euro12.Goals > 6]
###Output
_____no_output_____
###Markdown
Step 11. Select the teams that start with G
###Code
euro12[euro12.Team.str.startswith('G')]
###Output
_____no_output_____
###Markdown
Step 12. Select the first 7 columns
###Code
# use .iloc to slices via the position of the passed integers
# : means all, 0:7 means from 0 to 7
euro12.iloc[: , 0:7]
###Output
_____no_output_____
###Markdown
Step 13. Select all columns except the last 3.
###Code
# use negative to exclude the last 3 columns
euro12.iloc[: , :-3]
###Output
_____no_output_____
###Markdown
Step 14. Present only the Shooting Accuracy from England, Italy and Russia
###Code
# .loc is another way to slice, using the labels of the columns and indexes
euro12.loc[euro12.Team.isin(['England', 'Italy', 'Russia']), ['Team','Shooting Accuracy']]
###Output
_____no_output_____
###Markdown
Ex2 - Filtering and Sorting Data This time we are going to pull data directly from the internet. Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/02_Filtering_%26_Sorting/Euro12/Euro_2012_stats_TEAM.csv). Step 3. Assign it to a variable called euro12.
###Code
euro12 = pd.read_csv('https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/02_Filtering_%26_Sorting/Euro12/Euro_2012_stats_TEAM.csv', sep=',')
euro12
###Output
_____no_output_____
###Markdown
Step 4. Select only the Goal column.
###Code
euro12.Goals
###Output
_____no_output_____
###Markdown
Step 5. How many team participated in the Euro2012?
###Code
euro12.shape[0]
###Output
_____no_output_____
###Markdown
Step 6. What is the number of columns in the dataset?
###Code
euro12.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 16 entries, 0 to 15
Data columns (total 35 columns):
Team 16 non-null object
Goals 16 non-null int64
Shots on target 16 non-null int64
Shots off target 16 non-null int64
Shooting Accuracy 16 non-null object
% Goals-to-shots 16 non-null object
Total shots (inc. Blocked) 16 non-null int64
Hit Woodwork 16 non-null int64
Penalty goals 16 non-null int64
Penalties not scored 16 non-null int64
Headed goals 16 non-null int64
Passes 16 non-null int64
Passes completed 16 non-null int64
Passing Accuracy 16 non-null object
Touches 16 non-null int64
Crosses 16 non-null int64
Dribbles 16 non-null int64
Corners Taken 16 non-null int64
Tackles 16 non-null int64
Clearances 16 non-null int64
Interceptions 16 non-null int64
Clearances off line 15 non-null float64
Clean Sheets 16 non-null int64
Blocks 16 non-null int64
Goals conceded 16 non-null int64
Saves made 16 non-null int64
Saves-to-shots ratio 16 non-null object
Fouls Won 16 non-null int64
Fouls Conceded 16 non-null int64
Offsides 16 non-null int64
Yellow Cards 16 non-null int64
Red Cards 16 non-null int64
Subs on 16 non-null int64
Subs off 16 non-null int64
Players Used 16 non-null int64
dtypes: float64(1), int64(29), object(5)
memory usage: 4.4+ KB
###Markdown
Step 7. View only the columns Team, Yellow Cards and Red Cards and assign them to a dataframe called discipline
###Code
# filter only giving the column names
discipline = euro12[['Team', 'Yellow Cards', 'Red Cards']]
discipline
###Output
_____no_output_____
###Markdown
Step 8. Sort the teams by Red Cards, then to Yellow Cards
###Code
discipline.sort_values(['Red Cards', 'Yellow Cards'], ascending = False)
###Output
_____no_output_____
###Markdown
Step 9. Calculate the mean Yellow Cards given per Team
###Code
round(discipline['Yellow Cards'].mean())
###Output
_____no_output_____
###Markdown
Step 10. Filter teams that scored more than 6 goals
###Code
euro12[euro12.Goals > 6]
###Output
_____no_output_____
###Markdown
Step 11. Select the teams that start with G
###Code
euro12[euro12.Team.str.startswith('G')]
###Output
_____no_output_____
###Markdown
Step 12. Select the first 7 columns
###Code
# use .iloc to slices via the position of the passed integers
# : means all, 0:7 means from 0 to 7
euro12.iloc[: , 0:7]
###Output
_____no_output_____
###Markdown
Step 13. Select all columns except the last 3.
###Code
# use negative to exclude the last 3 columns
euro12.iloc[: , :-3]
###Output
_____no_output_____
###Markdown
Step 14. Present only the Shooting Accuracy from England, Italy and Russia
###Code
# .loc is another way to slice, using the labels of the columns and indexes
euro12.loc[euro12.Team.isin(['England', 'Italy', 'Russia']), ['Team','Shooting Accuracy']]
###Output
_____no_output_____
###Markdown
Ex2 - Filtering and Sorting Data This time we are going to pull data directly from the internet. Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/02_Filtering_%26_Sorting/Euro12/Euro_2012_stats_TEAM.csv). Step 3. Assign it to a variable called euro12.
###Code
euro12 = pd.read_csv('https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/02_Filtering_%26_Sorting/Euro12/Euro_2012_stats_TEAM.csv', sep=',')
euro12
###Output
_____no_output_____
###Markdown
Step 4. Select only the Goal column.
###Code
euro12.Goals
###Output
_____no_output_____
###Markdown
Step 5. How many team participated in the Euro2012?
###Code
euro12.shape[0]
###Output
_____no_output_____
###Markdown
Step 6. What is the number of columns in the dataset?
###Code
euro12.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 16 entries, 0 to 15
Data columns (total 35 columns):
Team 16 non-null object
Goals 16 non-null int64
Shots on target 16 non-null int64
Shots off target 16 non-null int64
Shooting Accuracy 16 non-null object
% Goals-to-shots 16 non-null object
Total shots (inc. Blocked) 16 non-null int64
Hit Woodwork 16 non-null int64
Penalty goals 16 non-null int64
Penalties not scored 16 non-null int64
Headed goals 16 non-null int64
Passes 16 non-null int64
Passes completed 16 non-null int64
Passing Accuracy 16 non-null object
Touches 16 non-null int64
Crosses 16 non-null int64
Dribbles 16 non-null int64
Corners Taken 16 non-null int64
Tackles 16 non-null int64
Clearances 16 non-null int64
Interceptions 16 non-null int64
Clearances off line 15 non-null float64
Clean Sheets 16 non-null int64
Blocks 16 non-null int64
Goals conceded 16 non-null int64
Saves made 16 non-null int64
Saves-to-shots ratio 16 non-null object
Fouls Won 16 non-null int64
Fouls Conceded 16 non-null int64
Offsides 16 non-null int64
Yellow Cards 16 non-null int64
Red Cards 16 non-null int64
Subs on 16 non-null int64
Subs off 16 non-null int64
Players Used 16 non-null int64
dtypes: float64(1), int64(29), object(5)
memory usage: 4.4+ KB
###Markdown
Step 7. View only the columns Team, Yellow Cards and Red Cards and assign them to a dataframe called discipline
###Code
# filter only giving the column names
discipline = euro12[['Team', 'Yellow Cards', 'Red Cards']]
discipline
###Output
_____no_output_____
###Markdown
Step 8. Sort the teams by Red Cards, then to Yellow Cards
###Code
discipline.sort_values(['Red Cards', 'Yellow Cards'], ascending = False)
###Output
_____no_output_____
###Markdown
Step 9. Calculate the mean Yellow Cards given per Team
###Code
round(discipline['Yellow Cards'].mean())
###Output
_____no_output_____
###Markdown
Step 10. Filter teams that scored more than 6 goals
###Code
euro12[euro12.Goals > 6]
###Output
_____no_output_____
###Markdown
Step 11. Select the teams that start with G
###Code
euro12[euro12.Team.str.startswith('G')]
###Output
_____no_output_____
###Markdown
Step 12. Select the first 7 columns
###Code
# use .iloc to slices via the position of the passed integers
# : means all, 0:7 means from 0 to 7
euro12.iloc[: , 0:7]
###Output
_____no_output_____
###Markdown
Step 13. Select all columns except the last 3.
###Code
# use negative to exclude the last 3 columns
euro12.iloc[: , :-3]
###Output
_____no_output_____
###Markdown
Step 14. Present only the Shooting Accuracy from England, Italy and Russia
###Code
# .loc is another way to slice, using the labels of the columns and indexes
euro12.loc[euro12.Team.isin(['England', 'Italy', 'Russia']), ['Team','Shooting Accuracy']]
###Output
_____no_output_____
###Markdown
Ex2 - Filtering and Sorting DataCheck out [Euro 12 Exercises Video Tutorial](https://youtu.be/iqk5d48Qisg) to watch a data scientist go through the exercises This time we are going to pull data directly from the internet. Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/02_Filtering_%26_Sorting/Euro12/Euro_2012_stats_TEAM.csv). Step 3. Assign it to a variable called euro12.
###Code
euro12 = pd.read_csv('https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/02_Filtering_%26_Sorting/Euro12/Euro_2012_stats_TEAM.csv', sep=',')
euro12
###Output
_____no_output_____
###Markdown
Step 4. Select only the Goal column.
###Code
euro12.Goals
###Output
_____no_output_____
###Markdown
Step 5. How many team participated in the Euro2012?
###Code
euro12.shape[0]
###Output
_____no_output_____
###Markdown
Step 6. What is the number of columns in the dataset?
###Code
euro12.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 16 entries, 0 to 15
Data columns (total 35 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Team 16 non-null object
1 Goals 16 non-null int64
2 Shots on target 16 non-null int64
3 Shots off target 16 non-null int64
4 Shooting Accuracy 16 non-null object
5 % Goals-to-shots 16 non-null object
6 Total shots (inc. Blocked) 16 non-null int64
7 Hit Woodwork 16 non-null int64
8 Penalty goals 16 non-null int64
9 Penalties not scored 16 non-null int64
10 Headed goals 16 non-null int64
11 Passes 16 non-null int64
12 Passes completed 16 non-null int64
13 Passing Accuracy 16 non-null object
14 Touches 16 non-null int64
15 Crosses 16 non-null int64
16 Dribbles 16 non-null int64
17 Corners Taken 16 non-null int64
18 Tackles 16 non-null int64
19 Clearances 16 non-null int64
20 Interceptions 16 non-null int64
21 Clearances off line 15 non-null float64
22 Clean Sheets 16 non-null int64
23 Blocks 16 non-null int64
24 Goals conceded 16 non-null int64
25 Saves made 16 non-null int64
26 Saves-to-shots ratio 16 non-null object
27 Fouls Won 16 non-null int64
28 Fouls Conceded 16 non-null int64
29 Offsides 16 non-null int64
30 Yellow Cards 16 non-null int64
31 Red Cards 16 non-null int64
32 Subs on 16 non-null int64
33 Subs off 16 non-null int64
34 Players Used 16 non-null int64
dtypes: float64(1), int64(29), object(5)
memory usage: 4.5+ KB
###Markdown
Step 7. View only the columns Team, Yellow Cards and Red Cards and assign them to a dataframe called discipline
###Code
# filter only giving the column names
discipline = euro12[['Team', 'Yellow Cards', 'Red Cards']]
discipline
###Output
_____no_output_____
###Markdown
Step 8. Sort the teams by Red Cards, then to Yellow Cards
###Code
discipline.sort_values(['Red Cards', 'Yellow Cards'], ascending = False)
###Output
_____no_output_____
###Markdown
Step 9. Calculate the mean Yellow Cards given per Team
###Code
round(discipline['Yellow Cards'].mean())
###Output
_____no_output_____
###Markdown
Step 10. Filter teams that scored more than 6 goals
###Code
euro12[euro12.Goals > 6]
###Output
_____no_output_____
###Markdown
Step 11. Select the teams that start with G
###Code
euro12[euro12.Team.str.startswith('G')]
###Output
_____no_output_____
###Markdown
Step 12. Select the first 7 columns
###Code
# use .iloc to slices via the position of the passed integers
# : means all, 0:7 means from 0 to 7
euro12.iloc[: , 0:7]
###Output
_____no_output_____
###Markdown
Step 13. Select all columns except the last 3.
###Code
# use negative to exclude the last 3 columns
euro12.iloc[: , :-3]
###Output
_____no_output_____
###Markdown
Step 14. Present only the Shooting Accuracy from England, Italy and Russia
###Code
# .loc is another way to slice, using the labels of the columns and indexes
euro12.loc[euro12.Team.isin(['England', 'Italy', 'Russia']), ['Team','Shooting Accuracy']]
###Output
_____no_output_____
###Markdown
Ex2 - Filtering and Sorting Data This time we are going to pull data directly from the internet. Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/02_Filtering_%26_Sorting/Euro12/Euro_2012_stats_TEAM.csv). Step 3. Assign it to a variable called euro12.
###Code
euro12 = pd.read_csv('https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/02_Filtering_%26_Sorting/Euro12/Euro_2012_stats_TEAM.csv', sep=',')
euro12
###Output
_____no_output_____
###Markdown
Step 4. Select only the Goal column.
###Code
euro12.Goals
###Output
_____no_output_____
###Markdown
Step 5. How many team participated in the Euro2012?
###Code
euro12.shape[0]
###Output
_____no_output_____
###Markdown
Step 6. What is the number of columns in the dataset?
###Code
euro12.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 16 entries, 0 to 15
Data columns (total 35 columns):
Team 16 non-null object
Goals 16 non-null int64
Shots on target 16 non-null int64
Shots off target 16 non-null int64
Shooting Accuracy 16 non-null object
% Goals-to-shots 16 non-null object
Total shots (inc. Blocked) 16 non-null int64
Hit Woodwork 16 non-null int64
Penalty goals 16 non-null int64
Penalties not scored 16 non-null int64
Headed goals 16 non-null int64
Passes 16 non-null int64
Passes completed 16 non-null int64
Passing Accuracy 16 non-null object
Touches 16 non-null int64
Crosses 16 non-null int64
Dribbles 16 non-null int64
Corners Taken 16 non-null int64
Tackles 16 non-null int64
Clearances 16 non-null int64
Interceptions 16 non-null int64
Clearances off line 15 non-null float64
Clean Sheets 16 non-null int64
Blocks 16 non-null int64
Goals conceded 16 non-null int64
Saves made 16 non-null int64
Saves-to-shots ratio 16 non-null object
Fouls Won 16 non-null int64
Fouls Conceded 16 non-null int64
Offsides 16 non-null int64
Yellow Cards 16 non-null int64
Red Cards 16 non-null int64
Subs on 16 non-null int64
Subs off 16 non-null int64
Players Used 16 non-null int64
dtypes: float64(1), int64(29), object(5)
memory usage: 4.4+ KB
###Markdown
Step 7. View only the columns Team, Yellow Cards and Red Cards and assign them to a dataframe called discipline
###Code
# filter only giving the column names
discipline = euro12[['Team', 'Yellow Cards', 'Red Cards']]
discipline
###Output
_____no_output_____
###Markdown
Step 8. Sort the teams by Red Cards, then to Yellow Cards
###Code
discipline.sort_values(['Red Cards', 'Yellow Cards'], ascending = False)
###Output
_____no_output_____
###Markdown
Step 9. Calculate the mean Yellow Cards given per Team
###Code
round(discipline['Yellow Cards'].mean())
###Output
_____no_output_____
###Markdown
Step 10. Filter teams that scored more than 6 goals
###Code
euro12[euro12.Goals > 6]
###Output
_____no_output_____
###Markdown
Step 11. Select the teams that start with G
###Code
euro12[euro12.Team.str.startswith('G')]
###Output
_____no_output_____
###Markdown
Step 12. Select the first 7 columns
###Code
# use .iloc to slices via the position of the passed integers
# : means all, 0:7 means from 0 to 7
euro12.iloc[: , 0:7]
###Output
_____no_output_____
###Markdown
Step 13. Select all columns except the last 3.
###Code
# use negative to exclude the last 3 columns
euro12.iloc[: , :-3]
###Output
_____no_output_____
###Markdown
Step 14. Present only the Shooting Accuracy from England, Italy and Russia
###Code
# .loc is another way to slice, using the labels of the columns and indexes
euro12.loc[euro12.Team.isin(['England', 'Italy', 'Russia']), ['Team','Shooting Accuracy']]
###Output
_____no_output_____
###Markdown
Ex2 - Filtering and Sorting DataCheck out [Euro 12 Exercises Video Tutorial](https://youtu.be/iqk5d48Qisg) to watch a data scientist go through the exercises This time we are going to pull data directly from the internet. Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/02_Filtering_%26_Sorting/Euro12/Euro_2012_stats_TEAM.csv). Step 3. Assign it to a variable called euro12.
###Code
euro12 = pd.read_csv('https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/02_Filtering_%26_Sorting/Euro12/Euro_2012_stats_TEAM.csv', sep=',')
euro12
###Output
_____no_output_____
###Markdown
Step 4. Select only the Goal column.
###Code
euro12.Goals
###Output
_____no_output_____
###Markdown
Step 5. How many team participated in the Euro2012?
###Code
euro12.shape[0]
###Output
_____no_output_____
###Markdown
Step 6. What is the number of columns in the dataset?
###Code
euro12.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 16 entries, 0 to 15
Data columns (total 35 columns):
Team 16 non-null object
Goals 16 non-null int64
Shots on target 16 non-null int64
Shots off target 16 non-null int64
Shooting Accuracy 16 non-null object
% Goals-to-shots 16 non-null object
Total shots (inc. Blocked) 16 non-null int64
Hit Woodwork 16 non-null int64
Penalty goals 16 non-null int64
Penalties not scored 16 non-null int64
Headed goals 16 non-null int64
Passes 16 non-null int64
Passes completed 16 non-null int64
Passing Accuracy 16 non-null object
Touches 16 non-null int64
Crosses 16 non-null int64
Dribbles 16 non-null int64
Corners Taken 16 non-null int64
Tackles 16 non-null int64
Clearances 16 non-null int64
Interceptions 16 non-null int64
Clearances off line 15 non-null float64
Clean Sheets 16 non-null int64
Blocks 16 non-null int64
Goals conceded 16 non-null int64
Saves made 16 non-null int64
Saves-to-shots ratio 16 non-null object
Fouls Won 16 non-null int64
Fouls Conceded 16 non-null int64
Offsides 16 non-null int64
Yellow Cards 16 non-null int64
Red Cards 16 non-null int64
Subs on 16 non-null int64
Subs off 16 non-null int64
Players Used 16 non-null int64
dtypes: float64(1), int64(29), object(5)
memory usage: 4.4+ KB
###Markdown
Step 7. View only the columns Team, Yellow Cards and Red Cards and assign them to a dataframe called discipline
###Code
# filter only giving the column names
discipline = euro12[['Team', 'Yellow Cards', 'Red Cards']]
discipline
###Output
_____no_output_____
###Markdown
Step 8. Sort the teams by Red Cards, then to Yellow Cards
###Code
discipline.sort_values(['Red Cards', 'Yellow Cards'], ascending = False)
###Output
_____no_output_____
###Markdown
Step 9. Calculate the mean Yellow Cards given per Team
###Code
round(discipline['Yellow Cards'].mean())
###Output
_____no_output_____
###Markdown
Step 10. Filter teams that scored more than 6 goals
###Code
euro12[euro12.Goals > 6]
###Output
_____no_output_____
###Markdown
Step 11. Select the teams that start with G
###Code
euro12[euro12.Team.str.startswith('G')]
###Output
_____no_output_____
###Markdown
Step 12. Select the first 7 columns
###Code
# use .iloc to slices via the position of the passed integers
# : means all, 0:7 means from 0 to 7
euro12.iloc[: , 0:7]
###Output
_____no_output_____
###Markdown
Step 13. Select all columns except the last 3.
###Code
# use negative to exclude the last 3 columns
euro12.iloc[: , :-3]
###Output
_____no_output_____
###Markdown
Step 14. Present only the Shooting Accuracy from England, Italy and Russia
###Code
# .loc is another way to slice, using the labels of the columns and indexes
euro12.loc[euro12.Team.isin(['England', 'Italy', 'Russia']), ['Team','Shooting Accuracy']]
###Output
_____no_output_____
###Markdown
Ex2 - Filtering and Sorting Data This time we are going to pull data directly from the internet. Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/02_Filtering_%26_Sorting/Euro12/Euro_2012_stats_TEAM.csv). Step 3. Assign it to a variable called euro12.
###Code
euro12 = pd.read_csv('https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/02_Filtering_%26_Sorting/Euro12/Euro_2012_stats_TEAM.csv', sep=',')
euro12
###Output
_____no_output_____
###Markdown
Step 4. Select only the Goal column.
###Code
euro12.Goals
###Output
_____no_output_____
###Markdown
Step 5. How many team participated in the Euro2012?
###Code
euro12.shape[0]
###Output
_____no_output_____
###Markdown
Step 6. What is the number of columns in the dataset?
###Code
euro12.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 16 entries, 0 to 15
Data columns (total 35 columns):
Team 16 non-null object
Goals 16 non-null int64
Shots on target 16 non-null int64
Shots off target 16 non-null int64
Shooting Accuracy 16 non-null object
% Goals-to-shots 16 non-null object
Total shots (inc. Blocked) 16 non-null int64
Hit Woodwork 16 non-null int64
Penalty goals 16 non-null int64
Penalties not scored 16 non-null int64
Headed goals 16 non-null int64
Passes 16 non-null int64
Passes completed 16 non-null int64
Passing Accuracy 16 non-null object
Touches 16 non-null int64
Crosses 16 non-null int64
Dribbles 16 non-null int64
Corners Taken 16 non-null int64
Tackles 16 non-null int64
Clearances 16 non-null int64
Interceptions 16 non-null int64
Clearances off line 15 non-null float64
Clean Sheets 16 non-null int64
Blocks 16 non-null int64
Goals conceded 16 non-null int64
Saves made 16 non-null int64
Saves-to-shots ratio 16 non-null object
Fouls Won 16 non-null int64
Fouls Conceded 16 non-null int64
Offsides 16 non-null int64
Yellow Cards 16 non-null int64
Red Cards 16 non-null int64
Subs on 16 non-null int64
Subs off 16 non-null int64
Players Used 16 non-null int64
dtypes: float64(1), int64(29), object(5)
memory usage: 4.4+ KB
###Markdown
Step 7. View only the columns Team, Yellow Cards and Red Cards and assign them to a dataframe called discipline
###Code
# filter only giving the column names
discipline = euro12[['Team', 'Yellow Cards', 'Red Cards']]
discipline
###Output
_____no_output_____
###Markdown
Step 8. Sort the teams by Red Cards, then to Yellow Cards
###Code
discipline.sort_values(['Red Cards', 'Yellow Cards'], ascending = False)
###Output
_____no_output_____
###Markdown
Step 9. Calculate the mean Yellow Cards given per Team
###Code
round(discipline['Yellow Cards'].mean())
###Output
_____no_output_____
###Markdown
Step 10. Filter teams that scored more than 6 goals
###Code
euro12[euro12.Goals > 6]
###Output
_____no_output_____
###Markdown
Step 11. Select the teams that start with G
###Code
euro12[euro12.Team.str.startswith('G')]
###Output
_____no_output_____
###Markdown
Step 12. Select the first 7 columns
###Code
# use .iloc to slices via the position of the passed integers
# : means all, 0:7 means from 0 to 7
euro12.iloc[: , 0:7]
###Output
_____no_output_____
###Markdown
Step 13. Select all columns except the last 3.
###Code
# use negative to exclude the last 3 columns
euro12.iloc[: , :-3]
###Output
_____no_output_____
###Markdown
Step 14. Present only the Shooting Accuracy from England, Italy and Russia
###Code
# .loc is another way to slice, using the labels of the columns and indexes
euro12.loc[euro12.Team.isin(['England', 'Italy', 'Russia']), ['Team','Shooting Accuracy']]
###Output
_____no_output_____
###Markdown
Ex2 - Filtering and Sorting Data This time we are going to pull data directly from the internet. Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/02_Filtering_%26_Sorting/Euro12/Euro_2012_stats_TEAM.csv). Step 3. Assign it to a variable called euro12.
###Code
euro12 = pd.read_csv('https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/02_Filtering_%26_Sorting/Euro12/Euro_2012_stats_TEAM.csv', sep=',')
euro12
###Output
_____no_output_____
###Markdown
Step 4. Select only the Goal column.
###Code
euro12.Goals
###Output
_____no_output_____
###Markdown
Step 5. How many team participated in the Euro2012?
###Code
euro12.shape[0]
###Output
_____no_output_____
###Markdown
Step 6. What is the number of columns in the dataset?
###Code
euro12.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 16 entries, 0 to 15
Data columns (total 35 columns):
Team 16 non-null object
Goals 16 non-null int64
Shots on target 16 non-null int64
Shots off target 16 non-null int64
Shooting Accuracy 16 non-null object
% Goals-to-shots 16 non-null object
Total shots (inc. Blocked) 16 non-null int64
Hit Woodwork 16 non-null int64
Penalty goals 16 non-null int64
Penalties not scored 16 non-null int64
Headed goals 16 non-null int64
Passes 16 non-null int64
Passes completed 16 non-null int64
Passing Accuracy 16 non-null object
Touches 16 non-null int64
Crosses 16 non-null int64
Dribbles 16 non-null int64
Corners Taken 16 non-null int64
Tackles 16 non-null int64
Clearances 16 non-null int64
Interceptions 16 non-null int64
Clearances off line 15 non-null float64
Clean Sheets 16 non-null int64
Blocks 16 non-null int64
Goals conceded 16 non-null int64
Saves made 16 non-null int64
Saves-to-shots ratio 16 non-null object
Fouls Won 16 non-null int64
Fouls Conceded 16 non-null int64
Offsides 16 non-null int64
Yellow Cards 16 non-null int64
Red Cards 16 non-null int64
Subs on 16 non-null int64
Subs off 16 non-null int64
Players Used 16 non-null int64
dtypes: float64(1), int64(29), object(5)
memory usage: 4.4+ KB
###Markdown
Step 7. View only the columns Team, Yellow Cards and Red Cards and assign them to a dataframe called discipline
###Code
# filter only giving the column names
discipline = euro12[['Team', 'Yellow Cards', 'Red Cards']]
discipline
###Output
_____no_output_____
###Markdown
Step 8. Sort the teams by Red Cards, then to Yellow Cards
###Code
discipline.sort_values(['Red Cards', 'Yellow Cards'], ascending = False)
###Output
_____no_output_____
###Markdown
Step 9. Calculate the mean Yellow Cards given per Team
###Code
round(discipline['Yellow Cards'].mean())
###Output
_____no_output_____
###Markdown
Step 10. Filter teams that scored more than 6 goals
###Code
euro12[euro12.Goals > 6]
###Output
_____no_output_____
###Markdown
Step 11. Select the teams that start with G
###Code
euro12[euro12.Team.str.startswith('G')]
###Output
_____no_output_____
###Markdown
Step 12. Select the first 7 columns
###Code
# use .iloc to slices via the position of the passed integers
# : means all, 0:7 means from 0 to 7
euro12.iloc[: , 0:7]
###Output
_____no_output_____
###Markdown
Step 13. Select all columns except the last 3.
###Code
# use negative to exclude the last 3 columns
euro12.iloc[: , :-3]
###Output
_____no_output_____
###Markdown
Step 14. Present only the Shooting Accuracy from England, Italy and Russia
###Code
# .loc is another way to slice, using the labels of the columns and indexes
euro12.loc[euro12.Team.isin(['England', 'Italy', 'Russia']), ['Team','Shooting Accuracy']]
###Output
_____no_output_____
###Markdown
Ex2 - Filtering and Sorting Data This time we are going to pull data directly from the internet. Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/02_Filtering_%26_Sorting/Euro12/Euro_2012_stats_TEAM.csv). Step 3. Assign it to a variable called euro12.
###Code
euro12 = pd.read_csv('https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/02_Filtering_%26_Sorting/Euro12/Euro_2012_stats_TEAM.csv', sep=',')
euro12
###Output
_____no_output_____
###Markdown
Step 4. Select only the Goal column.
###Code
euro12.Goals
###Output
_____no_output_____
###Markdown
Step 5. How many team participated in the Euro2012?
###Code
euro12.shape[0]
###Output
_____no_output_____
###Markdown
Step 6. What is the number of columns in the dataset?
###Code
euro12.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 16 entries, 0 to 15
Data columns (total 35 columns):
Team 16 non-null object
Goals 16 non-null int64
Shots on target 16 non-null int64
Shots off target 16 non-null int64
Shooting Accuracy 16 non-null object
% Goals-to-shots 16 non-null object
Total shots (inc. Blocked) 16 non-null int64
Hit Woodwork 16 non-null int64
Penalty goals 16 non-null int64
Penalties not scored 16 non-null int64
Headed goals 16 non-null int64
Passes 16 non-null int64
Passes completed 16 non-null int64
Passing Accuracy 16 non-null object
Touches 16 non-null int64
Crosses 16 non-null int64
Dribbles 16 non-null int64
Corners Taken 16 non-null int64
Tackles 16 non-null int64
Clearances 16 non-null int64
Interceptions 16 non-null int64
Clearances off line 15 non-null float64
Clean Sheets 16 non-null int64
Blocks 16 non-null int64
Goals conceded 16 non-null int64
Saves made 16 non-null int64
Saves-to-shots ratio 16 non-null object
Fouls Won 16 non-null int64
Fouls Conceded 16 non-null int64
Offsides 16 non-null int64
Yellow Cards 16 non-null int64
Red Cards 16 non-null int64
Subs on 16 non-null int64
Subs off 16 non-null int64
Players Used 16 non-null int64
dtypes: float64(1), int64(29), object(5)
memory usage: 4.4+ KB
###Markdown
Step 7. View only the columns Team, Yellow Cards and Red Cards and assign them to a dataframe called discipline
###Code
# filter only giving the column names
discipline = euro12[['Team', 'Yellow Cards', 'Red Cards']]
discipline
###Output
_____no_output_____
###Markdown
Step 8. Sort the teams by Red Cards, then to Yellow Cards
###Code
discipline.sort_values(['Red Cards', 'Yellow Cards'], ascending = False)
###Output
_____no_output_____
###Markdown
Step 9. Calculate the mean Yellow Cards given per Team
###Code
round(discipline['Yellow Cards'].mean())
###Output
_____no_output_____
###Markdown
Step 10. Filter teams that scored more than 6 goals
###Code
euro12[euro12.Goals > 6]
###Output
_____no_output_____
###Markdown
Step 11. Select the teams that start with G
###Code
euro12[euro12.Team.str.startswith('G')]
###Output
_____no_output_____
###Markdown
Step 12. Select the first 7 columns
###Code
# use .iloc to slices via the position of the passed integers
# : means all, 0:7 means from 0 to 7
euro12.iloc[: , 0:7]
###Output
_____no_output_____
###Markdown
Step 13. Select all columns except the last 3.
###Code
# use negative to exclude the last 3 columns
euro12.iloc[: , :-3]
###Output
_____no_output_____
###Markdown
Step 14. Present only the Shooting Accuracy from England, Italy and Russia
###Code
# .loc is another way to slice, using the labels of the columns and indexes
euro12.loc[euro12.Team.isin(['England', 'Italy', 'Russia']), ['Team','Shooting Accuracy']]
###Output
_____no_output_____
###Markdown
Ex2 - Filtering and Sorting DataCheck out [Euro 12 Exercises Video Tutorial](https://youtu.be/iqk5d48Qisg) to watch a data scientist go through the exercises This time we are going to pull data directly from the internet. Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/02_Filtering_%26_Sorting/Euro12/Euro_2012_stats_TEAM.csv). Step 3. Assign it to a variable called euro12.
###Code
euro12 = pd.read_csv('https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/02_Filtering_%26_Sorting/Euro12/Euro_2012_stats_TEAM.csv', sep=',')
euro12
###Output
_____no_output_____
###Markdown
Step 4. Select only the Goal column.
###Code
euro12.Goals
###Output
_____no_output_____
###Markdown
Step 5. How many team participated in the Euro2012?
###Code
euro12.shape[0]
###Output
_____no_output_____
###Markdown
Step 6. What is the number of columns in the dataset?
###Code
euro12.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 16 entries, 0 to 15
Data columns (total 35 columns):
Team 16 non-null object
Goals 16 non-null int64
Shots on target 16 non-null int64
Shots off target 16 non-null int64
Shooting Accuracy 16 non-null object
% Goals-to-shots 16 non-null object
Total shots (inc. Blocked) 16 non-null int64
Hit Woodwork 16 non-null int64
Penalty goals 16 non-null int64
Penalties not scored 16 non-null int64
Headed goals 16 non-null int64
Passes 16 non-null int64
Passes completed 16 non-null int64
Passing Accuracy 16 non-null object
Touches 16 non-null int64
Crosses 16 non-null int64
Dribbles 16 non-null int64
Corners Taken 16 non-null int64
Tackles 16 non-null int64
Clearances 16 non-null int64
Interceptions 16 non-null int64
Clearances off line 15 non-null float64
Clean Sheets 16 non-null int64
Blocks 16 non-null int64
Goals conceded 16 non-null int64
Saves made 16 non-null int64
Saves-to-shots ratio 16 non-null object
Fouls Won 16 non-null int64
Fouls Conceded 16 non-null int64
Offsides 16 non-null int64
Yellow Cards 16 non-null int64
Red Cards 16 non-null int64
Subs on 16 non-null int64
Subs off 16 non-null int64
Players Used 16 non-null int64
dtypes: float64(1), int64(29), object(5)
memory usage: 4.4+ KB
###Markdown
Step 7. View only the columns Team, Yellow Cards and Red Cards and assign them to a dataframe called discipline
###Code
# filter only giving the column names
discipline = euro12[['Team', 'Yellow Cards', 'Red Cards']]
discipline
###Output
_____no_output_____
###Markdown
Step 8. Sort the teams by Red Cards, then to Yellow Cards
###Code
discipline.sort_values(['Red Cards', 'Yellow Cards'], ascending = False)
###Output
_____no_output_____
###Markdown
Step 9. Calculate the mean Yellow Cards given per Team
###Code
round(discipline['Yellow Cards'].mean())
###Output
_____no_output_____
###Markdown
Step 10. Filter teams that scored more than 6 goals
###Code
euro12[euro12.Goals > 6]
###Output
_____no_output_____
###Markdown
Step 11. Select the teams that start with G
###Code
euro12[euro12.Team.str.startswith('G')]
###Output
_____no_output_____
###Markdown
Step 12. Select the first 7 columns
###Code
# use .iloc to slices via the position of the passed integers
# : means all, 0:7 means from 0 to 7
euro12.iloc[: , 0:7]
###Output
_____no_output_____
###Markdown
Step 13. Select all columns except the last 3.
###Code
# use negative to exclude the last 3 columns
euro12.iloc[: , :-3]
###Output
_____no_output_____
###Markdown
Step 14. Present only the Shooting Accuracy from England, Italy and Russia
###Code
# .loc is another way to slice, using the labels of the columns and indexes
euro12.loc[euro12.Team.isin(['England', 'Italy', 'Russia']), ['Team','Shooting Accuracy']]
###Output
_____no_output_____
###Markdown
Ex2 - Filtering and Sorting DataCheck out [Euro 12 Exercises Video Tutorial](https://youtu.be/iqk5d48Qisg) to watch a data scientist go through the exercises This time we are going to pull data directly from the internet. Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/02_Filtering_%26_Sorting/Euro12/Euro_2012_stats_TEAM.csv). Step 3. Assign it to a variable called euro12.
###Code
euro12 = pd.read_csv('https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/02_Filtering_%26_Sorting/Euro12/Euro_2012_stats_TEAM.csv', sep=',')
euro12
###Output
_____no_output_____
###Markdown
Step 4. Select only the Goal column.
###Code
euro12.Goals
###Output
_____no_output_____
###Markdown
Step 5. How many team participated in the Euro2012?
###Code
euro12.shape[0]
###Output
_____no_output_____
###Markdown
Step 6. What is the number of columns in the dataset?
###Code
euro12.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 16 entries, 0 to 15
Data columns (total 35 columns):
Team 16 non-null object
Goals 16 non-null int64
Shots on target 16 non-null int64
Shots off target 16 non-null int64
Shooting Accuracy 16 non-null object
% Goals-to-shots 16 non-null object
Total shots (inc. Blocked) 16 non-null int64
Hit Woodwork 16 non-null int64
Penalty goals 16 non-null int64
Penalties not scored 16 non-null int64
Headed goals 16 non-null int64
Passes 16 non-null int64
Passes completed 16 non-null int64
Passing Accuracy 16 non-null object
Touches 16 non-null int64
Crosses 16 non-null int64
Dribbles 16 non-null int64
Corners Taken 16 non-null int64
Tackles 16 non-null int64
Clearances 16 non-null int64
Interceptions 16 non-null int64
Clearances off line 15 non-null float64
Clean Sheets 16 non-null int64
Blocks 16 non-null int64
Goals conceded 16 non-null int64
Saves made 16 non-null int64
Saves-to-shots ratio 16 non-null object
Fouls Won 16 non-null int64
Fouls Conceded 16 non-null int64
Offsides 16 non-null int64
Yellow Cards 16 non-null int64
Red Cards 16 non-null int64
Subs on 16 non-null int64
Subs off 16 non-null int64
Players Used 16 non-null int64
dtypes: float64(1), int64(29), object(5)
memory usage: 4.4+ KB
###Markdown
Step 7. View only the columns Team, Yellow Cards and Red Cards and assign them to a dataframe called discipline
###Code
# filter only giving the column names
discipline = euro12[['Team', 'Yellow Cards', 'Red Cards']]
discipline
###Output
_____no_output_____
###Markdown
Step 8. Sort the teams by Red Cards, then to Yellow Cards
###Code
discipline.sort_values(['Red Cards', 'Yellow Cards'], ascending = False)
###Output
_____no_output_____
###Markdown
Step 9. Calculate the mean Yellow Cards given per Team
###Code
round(discipline['Yellow Cards'].mean())
###Output
_____no_output_____
###Markdown
Step 10. Filter teams that scored more than 6 goals
###Code
euro12[euro12.Goals > 6]
###Output
_____no_output_____
###Markdown
Step 11. Select the teams that start with G
###Code
euro12[euro12.Team.str.startswith('G')]
###Output
_____no_output_____
###Markdown
Step 12. Select the first 7 columns
###Code
# use .iloc to slices via the position of the passed integers
# : means all, 0:7 means from 0 to 7
euro12.iloc[: , 0:7]
###Output
_____no_output_____
###Markdown
Step 13. Select all columns except the last 3.
###Code
# use negative to exclude the last 3 columns
euro12.iloc[: , :-3]
###Output
_____no_output_____
###Markdown
Step 14. Present only the Shooting Accuracy from England, Italy and Russia
###Code
# .loc is another way to slice, using the labels of the columns and indexes
euro12.loc[euro12.Team.isin(['England', 'Italy', 'Russia']), ['Team','Shooting Accuracy']]
###Output
_____no_output_____
###Markdown
Ex2 - Filtering and Sorting DataCheck out [Euro 12 Exercises Video Tutorial](https://youtu.be/iqk5d48Qisg) to watch a data scientist go through the exercises This time we are going to pull data directly from the internet. Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/02_Filtering_%26_Sorting/Euro12/Euro_2012_stats_TEAM.csv). Step 3. Assign it to a variable called euro12.
###Code
euro12 = pd.read_csv('https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/02_Filtering_%26_Sorting/Euro12/Euro_2012_stats_TEAM.csv', sep=',')
euro12
###Output
_____no_output_____
###Markdown
Step 4. Select only the Goals column.
###Code
euro12.Goals
###Output
_____no_output_____
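###Markdown
A brief aside: attribute access such as `euro12.Goals` only works when the column name is a valid Python identifier with no spaces; bracket notation is the general, safer form.
###Code
# equivalent selection using bracket notation, which also works for names with spaces
euro12['Goals']
###Output
_____no_output_____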
###Markdown
Step 5. How many teams participated in Euro 2012?
###Code
euro12.shape[0]
###Output
_____no_output_____
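###Markdown
A minimal equivalent, shown only for completeness: `len` on a DataFrame returns its row count as well.
###Code
# same row count via len()
len(euro12)
###Output
_____no_output_____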
###Markdown
Step 6. What is the number of columns in the dataset?
###Code
euro12.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 16 entries, 0 to 15
Data columns (total 35 columns):
Team 16 non-null object
Goals 16 non-null int64
Shots on target 16 non-null int64
Shots off target 16 non-null int64
Shooting Accuracy 16 non-null object
% Goals-to-shots 16 non-null object
Total shots (inc. Blocked) 16 non-null int64
Hit Woodwork 16 non-null int64
Penalty goals 16 non-null int64
Penalties not scored 16 non-null int64
Headed goals 16 non-null int64
Passes 16 non-null int64
Passes completed 16 non-null int64
Passing Accuracy 16 non-null object
Touches 16 non-null int64
Crosses 16 non-null int64
Dribbles 16 non-null int64
Corners Taken 16 non-null int64
Tackles 16 non-null int64
Clearances 16 non-null int64
Interceptions 16 non-null int64
Clearances off line 15 non-null float64
Clean Sheets 16 non-null int64
Blocks 16 non-null int64
Goals conceded 16 non-null int64
Saves made 16 non-null int64
Saves-to-shots ratio 16 non-null object
Fouls Won 16 non-null int64
Fouls Conceded 16 non-null int64
Offsides 16 non-null int64
Yellow Cards 16 non-null int64
Red Cards 16 non-null int64
Subs on 16 non-null int64
Subs off 16 non-null int64
Players Used 16 non-null int64
dtypes: float64(1), int64(29), object(5)
memory usage: 4.4+ KB
###Markdown
Step 7. View only the columns Team, Yellow Cards and Red Cards and assign them to a dataframe called discipline
###Code
# select just these columns by passing a list of column names
discipline = euro12[['Team', 'Yellow Cards', 'Red Cards']]
discipline
###Output
_____no_output_____
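###Markdown
A hedged caution, not part of the exercise: `discipline` above is a selection derived from `euro12`, so assigning into it later can raise pandas' SettingWithCopyWarning; an explicit copy sidesteps the ambiguity.
###Code
# explicit copy so later in-place edits do not warn about writing to a view
discipline = euro12[['Team', 'Yellow Cards', 'Red Cards']].copy()
###Output
_____no_output_____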
###Markdown
Step 8. Sort the teams by Red Cards, then by Yellow Cards
###Code
discipline.sort_values(['Red Cards', 'Yellow Cards'], ascending = False)
###Output
_____no_output_____
###Markdown
Step 9. Calculate the mean Yellow Cards given per Team
###Code
round(discipline['Yellow Cards'].mean())
###Output
_____no_output_____
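###Markdown
An aside on the rounding: `.mean()` returns a float, and `round` with no digits argument coerces it to the nearest integer, so the unrounded value below may be worth comparing.
###Code
# unrounded mean for comparison
discipline['Yellow Cards'].mean()
###Output
_____no_output_____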
###Markdown
Step 10. Filter teams that scored more than 6 goals
###Code
euro12[euro12.Goals > 6]
###Output
_____no_output_____
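###Markdown
A hedged alternative: the `query` API expresses the same filter as a string expression, which some find more readable.
###Code
# same filter written as a query expression
euro12.query('Goals > 6')
###Output
_____no_output_____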
###Markdown
Step 11. Select the teams that start with G
###Code
euro12[euro12.Team.str.startswith('G')]
###Output
_____no_output_____
###Markdown
Step 12. Select the first 7 columns
###Code
# use .iloc to slice by the integer positions of rows and columns
# : means all rows; 0:7 means columns 0 up to (but not including) 7
euro12.iloc[: , 0:7]
###Output
_____no_output_____
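###Markdown
A small equivalence sketch: omitting the start of a slice defaults to 0, so `:7` selects the same first seven columns.
###Code
# shorthand slice: columns 0 through 6
euro12.iloc[:, :7]
###Output
_____no_output_____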
###Markdown
Step 13. Select all columns except the last 3.
###Code
# use a negative slice bound to exclude the last 3 columns
euro12.iloc[: , :-3]
###Output
_____no_output_____
###Markdown
Step 14. Present only the Shooting Accuracy from England, Italy and Russia
###Code
# .loc is another way to slice, using row and column labels instead of positions
euro12.loc[euro12.Team.isin(['England', 'Italy', 'Russia']), ['Team','Shooting Accuracy']]
###Output
_____no_output_____
Notebook/ModelComparision/ML Model Comparision.ipynb | ###Markdown
Import Data Set
###Code
import boto3
import pandas as pd
import os
from configparser import ConfigParser
from smart_open import smart_open
config = ConfigParser()
config_file = ('config.ini')
config.read(config_file)
default = config['aws.data']
aws_key = default['accessKey']
aws_secret = default['secretAccessKey']
bucket_name = 'texttoxicity-train-test'
object_key = 'train.csv'
object_key_train = 'train.csv'
object_key_test ='test.csv'
path_train = 's3://{}:{}@{}/{}'.format(aws_key, aws_secret, bucket_name, object_key_train)
path_test = 's3://{}:{}@{}/{}'.format(aws_key, aws_secret, bucket_name, object_key_test)
train = pd.read_csv(smart_open(path_train))
test =pd.read_csv(smart_open(path_test))
train.head()
###Output
_____no_output_____
###Markdown
Feature Extraction
###Code
train['total_length'] = train['comment_text'].apply(len)
train['capitals'] = train['comment_text'].apply(lambda comment: sum(1 for c in comment if c.isupper()))
train['caps_vs_length'] = train.apply(lambda row: float(row['capitals'])/float(row['total_length']),axis=1)
train['num_exclamation_marks'] = train['comment_text'].apply(lambda comment: comment.count('!'))
train['num_question_marks'] = train['comment_text'].apply(lambda comment: comment.count('?'))
train['num_punctuation'] = train['comment_text'].apply(lambda comment: sum(comment.count(w) for w in '.,;:'))
train['num_symbols'] = train['comment_text'].apply(lambda comment: sum(comment.count(w) for w in '*&$%'))
train['num_words'] = train['comment_text'].apply(lambda comment: len(comment.split()))
train['num_unique_words'] = train['comment_text'].apply(lambda comment: len(set(w for w in comment.split())))
train['words_vs_unique'] = train['num_unique_words'] / train['num_words']
train['num_smilies'] = train['comment_text'].apply(lambda comment: sum(comment.count(w) for w in (':-)', ':)', ';-)', ';)')))
features = ('total_length', 'capitals', 'caps_vs_length', 'num_exclamation_marks','num_question_marks', 'num_punctuation', 'num_words', 'num_unique_words','words_vs_unique', 'num_smilies', 'num_symbols')
columns = ('target', 'severe_toxicity', 'obscene', 'identity_attack', 'insult', 'threat', 'funny', 'wow', 'sad', 'likes', 'disagree', 'sexual_explicit','identity_annotator_count', 'toxicity_annotator_count')
rows = [{c:train[f].corr(train[c]) for c in columns} for f in features]
train_correlations = pd.DataFrame(rows, index=features)
train_correlations
train.fillna(0,inplace=True)
train.target = train.target.apply(lambda x: 1 if x>0.45 else 0)
from string import punctuation
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
stop_words = set(stopwords.words('english'))
train['comment_text'] = train.comment_text.apply(lambda x: x.lower())
train['cleaned_comment'] = train.comment_text.apply(lambda x: word_tokenize(x))
train['cleaned_comment'] = train.cleaned_comment.apply(lambda x: [w for w in x if w not in stop_words])
train['cleaned_comment'] = train.cleaned_comment.apply(lambda x: ' '.join(x))
train.drop('comment_text',axis=1,inplace=True)
###Output
_____no_output_____
###Markdown
Modelling
###Code
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import chi2
import numpy as np
# target variable
y = train.target
# train-test split
X_train, X_test, y_train, y_test = train_test_split(train, y, test_size=0.33,random_state=53)
# Initialize a CountVectorizer object: count_vectorizer
count_vectorizer = CountVectorizer(stop_words="english")
count_train = count_vectorizer.fit_transform(X_train["cleaned_comment"])
y_train = np.asarray(y_train.values)
# Pick up the most effective words
ch2 = SelectKBest(chi2, k = 300)
X_new = ch2.fit_transform(count_train, y_train)
# Transform the test data using only the 'text' column values: count_test
count_test = count_vectorizer.transform(X_test["cleaned_comment"])
X_test_new = ch2.transform(X=count_test)
###Output
_____no_output_____
###Markdown
1. Naive Bayes Multinomial Naive Bayes is a specialized version of Naive Bayes designed for text documents. Whereas simple Naive Bayes would model a document as the presence and absence of particular words, multinomial Naive Bayes explicitly models the word counts and adjusts the underlying calculations to deal with them. It estimates the conditional probability of a particular word given a class as the relative frequency of term t in documents belonging to class c. The variation takes into account the number of occurrences of term t in training documents from class c, including multiple occurrences.
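As a sketch of that estimate (with the Laplace smoothing that scikit-learn's `MultinomialNB` applies by default, `alpha=1`): $$\hat{P}(t \mid c) = \frac{T_{ct} + 1}{\sum_{t' \in V} T_{ct'} + |V|},$$ where $T_{ct}$ is the count of term $t$ in training documents of class $c$ and $V$ is the vocabulary.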
###Code
from sklearn.naive_bayes import MultinomialNB
clf = MultinomialNB()
# Fit the classifier to the training data
clf.fit(X_new, y_train)
# Create the predicted tags: pred
pred_nb = clf.predict(X_test_new)
###Output
_____no_output_____
###Markdown
2. Decision Tree Classifier The general motive of using a Decision Tree is to create a training model which can be used to predict the class or value of target variables by learning decision rules inferred from prior data (training data). A decision tree is a flowchart-like tree structure where an internal node represents a feature (or attribute), a branch represents a decision rule, and each leaf node represents the outcome. The topmost node in a decision tree is known as the root node. It learns to partition on the basis of the attribute value. It partitions the tree in a recursive manner, called recursive partitioning. This flowchart-like structure helps in decision making, and its visualization easily mimics human-level reasoning. That is why decision trees are easy to understand and interpret.
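One common split criterion, and scikit-learn's default for `DecisionTreeClassifier`, is Gini impurity, $$G = 1 - \sum_{k} p_k^2,$$ where $p_k$ is the fraction of samples of class $k$ at a node; the tree picks the split that reduces it the most.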
###Code
from sklearn.tree import DecisionTreeClassifier
clf = DecisionTreeClassifier()
# Fit the classifier to the training data
clf.fit(X_new, y_train)
# Create the predicted tags: pred
pred_dt = clf.predict(X_test_new)
###Output
_____no_output_____
###Markdown
3. Random Forest The random forest is a model made up of many decision trees. Rather than simply averaging the predictions of the trees (which we could call a “forest”), this model uses two key concepts that give it the name random: 1. Random sampling of training data points when building trees. 2. Random subsets of features considered when splitting nodes.
###Code
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier()
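# Note (added): the two sources of randomness described above correspond to
# the bootstrap and max_features arguments; with the scikit-learn defaults of
# this era (an assumption, consistent with the warning in the output below),
# the call above is equivalent to
# RandomForestClassifier(n_estimators=10, bootstrap=True, max_features='auto')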
# Fit the classifier to the training data
clf.fit(X_new, y_train)
# Create the predicted tags: pred
pred_rf = clf.predict(X_test_new)
###Output
C:\Users\HarshithaGS\AppData\Roaming\Python\Python37\site-packages\sklearn\ensemble\forest.py:246: FutureWarning: The default value of n_estimators will change from 10 in version 0.20 to 100 in 0.22.
"10 in version 0.20 to 100 in 0.22.", FutureWarning)
###Markdown
4.Logistic Regression The logistic regression model computes a weighted sum of the input variables, similar to linear regression, but it runs the result through a special non-linear function, the logistic (sigmoid) function, to produce the output y.
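Concretely (a standard formulation, not code from this notebook): $$\hat{y} = \sigma(w^{\top}x + b), \qquad \sigma(z) = \frac{1}{1 + e^{-z}},$$ with the predicted class obtained by thresholding $\hat{y}$ at 0.5.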
###Code
from sklearn.linear_model import LogisticRegression
clf = LogisticRegression()
# Fit the classifier to the training data
clf.fit(X_new, y_train)
# Create the predicted tags: pred
pred_lr = clf.predict(X_test_new)
###Output
C:\Users\HarshithaGS\AppData\Roaming\Python\Python37\site-packages\sklearn\linear_model\logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
###Markdown
Model Comparison
###Code
from sklearn import metrics
from sklearn.metrics import log_loss
from sklearn.metrics import roc_curve, auc
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
###Output
_____no_output_____
###Markdown
1.Comparing confusion matrices of all models
###Code
# Draw each confusion matrix on its own figure so the heatmaps do not
# overlap on a single axes object
print('Confusion Matrix of Naive Bayes')
sns.heatmap(metrics.confusion_matrix(pred_nb,y_test),annot=True,fmt='2.0f')
plt.show()
print('Confusion Matrix of Decision Tree')
sns.heatmap(metrics.confusion_matrix(pred_dt,y_test),annot=True,fmt='2.0f')
plt.show()
print('Confusion Matrix of Random Forest')
sns.heatmap(metrics.confusion_matrix(pred_rf,y_test),annot=True,fmt='2.0f')
plt.show()
print('Confusion Matrix of Logistic Regression')
sns.heatmap(metrics.confusion_matrix(pred_lr,y_test),annot=True,fmt='2.0f')
plt.show()
###Output
Confusion Matrix of Logistic Regression
###Markdown
2.Comparing Accuracy score of all models
###Code
score = metrics.accuracy_score(y_test, pred_nb)
print('Accuracy of Naive Bayes Model is:',score)
score = metrics.accuracy_score(y_test, pred_dt)
print('Accuracy of Decision Tree Model is:',score)
score = metrics.accuracy_score(y_test, pred_rf)
print('Accuracy of Random Forest Model is:',score)
score = metrics.accuracy_score(y_test, pred_lr)
print('Accuracy of Logistic Regression Model is:',score)
###Output
Accuracy of Naive Bayes Model is: 0.9341782948209312
Accuracy of Decision Tree Model is: 0.9270779991571652
Accuracy of Random Forest Model is: 0.9339029463960417
Accuracy of Logistic Regression Model is: 0.9375529919796376
###Markdown
3. Comparing F1-score of all models
###Code
f1 = metrics.f1_score(y_test, pred_nb)
print('F1 score of Naive Bayes Model is:',f1)
f1 = metrics.f1_score(y_test, pred_dt)
print('F1 score of Decision Tree Model is:',f1)
f1 = metrics.f1_score(y_test, pred_rf)
print('F1 score of Random Forest Model is:',f1)
f1 = metrics.f1_score(y_test, pred_lr)
print('F1 score of Logistic Regression Model is:',f1)
###Output
F1 score of Naive Bayes Model is: 0.9375529919796376
F1 score of Decision Tree Model is: 0.9375529919796376
F1 score of Random Forest Model is: 0.9375529919796376
F1 score of Logistic Regression Model is: 0.9375529919796376
###Markdown
4. Comparing Log loss of all models
###Code
loss = log_loss(y_test,pred_nb)
print('Log loss of Naive Bayes model is :' ,loss)
loss = log_loss(y_test,pred_dt)
print('Log loss of Decision Tree model is :' ,loss)
loss = log_loss(y_test,pred_rf)
print('Log loss of Random Forest model is :' ,loss)
loss = log_loss(y_test,pred_lr)
print('Log loss of Logistic Regression model is :' ,loss)
###Output
Log loss of Naive Bayes model is : 2.273413561692878
Log loss of Decision Tree model is : 2.5186577517160136
Log loss of Random Forest model is : 2.2829269437503528
Log loss of Logistic Regression model is : 2.1568526479840164
###Markdown
5. Comparing AUC-ROC of all models
###Code
fpr, tpr, thresholds = roc_curve(y_test, pred_nb)
roc_auc = auc(fpr, tpr)
plt.plot(fpr,
         tpr,
         color='#1947D1',
         linestyle='--',
         label='(ROC AUC = %0.2f)' % (roc_auc),
         lw=2
        )
plt.plot([0, 1], [0, 1], '--', color=(0.6, 0.6, 0.6), label='Random Guessing')
plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Multinomial NB')
plt.legend(loc="lower right")
plt.savefig('roc_maxfeatures.eps', dpi=300)
plt.show()
fpr, tpr, thresholds = roc_curve(y_test, pred_dt)
roc_auc = auc(fpr, tpr)
plt.plot(fpr,
         tpr,
         color='#1947D1',
         linestyle='--',
         label='(ROC AUC = %0.2f)' % (roc_auc),
         lw=2
        )
plt.plot([0, 1], [0, 1], '--', color=(0.6, 0.6, 0.6), label='Random Guessing')
plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Decision Tree')
plt.legend(loc="lower right")
plt.savefig('roc_maxfeatures.eps', dpi=300)
plt.show()
fpr, tpr, thresholds = roc_curve(y_test, pred_rf)
roc_auc = auc(fpr, tpr)
plt.plot(fpr,
         tpr,
         color='#1947D1',
         linestyle='--',
         label='(ROC AUC = %0.2f)' % (roc_auc),
         lw=2
        )
plt.plot([0, 1], [0, 1], '--', color=(0.6, 0.6, 0.6), label='Random Guessing')
plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Random Forest')
plt.legend(loc="lower right")
plt.savefig('roc_maxfeatures.eps', dpi=300)
plt.show()
fpr, tpr, thresholds = roc_curve(y_test, pred_lr)
roc_auc = auc(fpr, tpr)
plt.plot(fpr,
         tpr,
         color='#1947D1',
         linestyle='--',
         label='(ROC AUC = %0.2f)' % (roc_auc),
         lw=2
        )
plt.plot([0, 1], [0, 1], '--', color=(0.6, 0.6, 0.6), label='Random Guessing')
plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Logistic Regression')
plt.legend(loc="lower right")
plt.savefig('roc_maxfeatures.eps', dpi=300)
plt.show()
###Output
_____no_output_____ |
notebooks/ctx-affix-crf-coarse-analysis.ipynb | ###Markdown
Contextual affix CRF coarse-grained experiments analysis
###Code
from collections import defaultdict
import os
import pprint
from pymongo import MongoClient
from scipy.stats import f_oneway, ttest_ind
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
plt.style.use('ggplot')
%matplotlib inline
client = MongoClient(os.environ['SACRED_MONGO_URL'])
db = client[os.environ['SACRED_DB_NAME']]
run_criteria = {
'experiment.name': 'id-pos-tagging-ctx-affix-crf-coarse',
'meta.command': 'evaluate',
'status': 'COMPLETED',
}
db.runs.count(run_criteria)
data = defaultdict(list)
for run in db.runs.find(run_criteria):
data['run_id'].append(run['_id'])
for conf in 'c2 min_freq use_prefix use_suffix use_wordshape window'.split():
data[conf].append(run['config'][conf])
metric = db.metrics.find_one({'run_id': run['_id'], 'name': 'f1'})
if metric is not None:
if len(metric['values']) != 1:
print(f"run {run['_id']} metric f1 has length != 1, taking the last one")
data['f1'].append(metric['values'][-1])
df = pd.DataFrame(data)
len(df)
df.head()
###Output
_____no_output_____
###Markdown
The F1 score is from the dev set. Analyzing binary variables use_prefix
###Code
df.boxplot(column='f1', by='use_prefix', figsize=(12, 8))
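# Added sketch: quantify the gap with the same two-sample t-test used for
# use_suffix and use_wordshape below
ttest_ind(df[df.use_prefix]['f1'], df[~df.use_prefix]['f1'])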
###Output
_____no_output_____
###Markdown
It seems clear that `use_prefix=True` is better than `use_prefix=False`. use_suffix
###Code
df.boxplot(column='f1', by='use_suffix', figsize=(12, 8))
ttest_ind(df[df.use_suffix]['f1'], df[~df.use_suffix]['f1'])
###Output
_____no_output_____
###Markdown
It seems clear as well that `use_suffix=True` is better than `use_suffix=False`. use_wordshape
###Code
df.boxplot(column='f1', by='use_wordshape', figsize=(12, 8))
ttest_ind(df[df.use_wordshape]['f1'], df[~df.use_wordshape]['f1'])
###Output
_____no_output_____
###Markdown
It seems that wordshape is not a useful feature. It is better to set `use_wordshape=False`. Analyzing multinomial variables min_freq
###Code
df.boxplot(column='f1', by='min_freq', figsize=(12, 8))
samples = []
for min_freq in df.min_freq.unique():
samples.append(df[df.min_freq == min_freq]['f1'])
f_oneway(*samples)
###Output
_____no_output_____
###Markdown
There seems to be no difference among the different values for `min_freq`, so we'll just use the default value of 1. window
###Code
df.boxplot(column='f1', by='window', figsize=(12, 8))
samples = []
for window in df.window.unique():
samples.append(df[df.window == window]['f1'])
f_oneway(*samples)
###Output
_____no_output_____
###Markdown
There is significant difference when varying `window` value as the p-value is lower than 0.05. But from the boxplot, it is not clear what the best range is. Thus, we'll keep this range for random search. Analyzing continuous variables c2
###Code
df['log10_c2'] = np.log10(df.c2)
df.head()
df.plot.scatter(x='log10_c2', y='f1')
###Output
_____no_output_____ |
Actividad_5/Quiz.ipynb | ###Markdown
Integration Rules Quiz Instructions: You may write the code in this notebook or use the text editor of your preference, but you must submit your code's function in this notebook
###Code
import numpy as np
from test import *
###Output
_____no_output_____
###Markdown
1. Integration by Riemann sums: The integral of a function $f(x)$ over an interval $[a,b]$ can be approximated by a finite sum of the areas of $n$ rectangles which, for simplicity, are assumed to share the same base width $\Delta x$. The approximation has the following form:$$\int_{a}^{b}f(x)dx=\sum_{i=0}^{n-1}f(x_{i})\Delta x $$With $\Delta x=\frac{b-a}{n}$, $a\leq x_{i}\leq b$, and we can choose the $x_{i}$ such that:$$x_{i}=a+i\Delta x, $$with $i=0,...,n-1$* Create a Python function called integral_riemann that integrates the function $e^{-x^2}$ between 0 and 1 by Riemann sums. Use as many iterations as necessary.
###Code
def integral_riemann(n=10**6):
    # Left Riemann sum of exp(-x**2) on [0, 1]; n = 10**6 is an
    # illustrative choice, large enough to pass the 1e-6 assert below
    dx = 1.0 / n
    x = np.arange(n) * dx   # x_i = a + i*dx, i = 0, ..., n-1
    return np.sum(np.exp(-x**2)) * dx
X=np.linspace(1,2,21)
X
#Is the code correct? Do not delete this line; enter the same input
#values as your function
test1(1,2,3)
#This is what is inside the assert:
assert np.abs(integral_riemann()-0.746824132812427)<1e-6
###Output
Resultado Incorrecto
###Markdown
2. Trapezoidal rule: Instead of approximating the area by a sum of rectangles, we will do it with a sum of trapezoids, where the top of each trapezoid is approximated by a straight line. After summing the areas of the trapezoids, the following formula is obtained:$$\int_{a}^{b}f(x)dx=\Delta x \left(\frac{y_0}{2}+\sum_{i=1}^{n-1}f(x_{i})+\frac{y_{n}}{2} \right)$$With $\Delta x=\frac{b-a}{n}$, $a\leq x_{i}\leq b$, and we can choose the $x_{i}$ such that:$$x_{i}=a+i\Delta x, $$with $i=0,...,n$* Create a Python function called integral_trapecio that integrates the function $e^{-x^2}$ between 0 and 1 using the trapezoidal rule.
###Code
def integral_trapecio(n=1000):
    # Trapezoidal rule for exp(-x**2) on [0, 1]; n = 1000 subintervals
    # is an illustrative choice
    x = np.linspace(0.0, 1.0, n + 1)
    y = np.exp(-x**2)
    dx = 1.0 / n
    return dx * (y[0]/2 + np.sum(y[1:-1]) + y[-1]/2)
#Is the code correct? Do not delete this line; enter the same input
#values as your function
test2()
###Output
Resultado Incorrecto
|
Session_2/1_pmod_grove_tmp.ipynb | ###Markdown
Grove Temperature Sensor example----* [Introduction](Introduction)* [Setup the board](Setup-the-board)* [Setup the sensor](Setup-the-sensor)* [Read from the sensor](Read-from-the-sensor)* [Display a graph](Display-a-graph)---- IntroductionThe PYNQ-Z1 and PYNQ-Z2 boards have two Pmod ports and an Arduino interface. The PYNQ-Z2 also has a Raspberry Pi interface. A number of Pmod, Grove, and other peripherals are supported by PYNQ. Pmods can be plugged directly into the Pmod port. Grove Peripherals can be connected to the Pmod Port or Arduino header through adapter boards.The external pins of these interfaces are connected to PL pins. This means the logic to control an external peripheral must be implemented in the PL in an Overlay. Pmods, Grove and Arduino peripherals can be used with IOPs in the *base* Overlay for the PYNQ-Z1 and PYNQ-Z2. This notebook will show how to use the [Grove Temperature Sensor v1.2](http://www.seeedstudio.com/wiki/Grove_-_Temperature_Sensor_V1.2) with the [Grove I2C ADC](http://wiki.seeedstudio.com/Grove-I2C_ADC/) on the PYNQ-Z1 or PYNQ-Z2 board. The Grove Temperature sensor produces an analog signal, and requires an ADC. You will also see how to plot a graph using _matplotlib_, a Python package for 2D plots. A Grove Temperature sensor, a Grove ADC, and a Pynq Grove Adapter are required for this notebook example (a Pynq Arduino adapter could also be used instead of the Pynq Grove Adapter).The driver for the Temperature sensor running on the IOP supports reading a single value of temperature, or reading and logging of multiple values at regular intervals. ---- Setup the boardStart by loading the Base Overlay.
###Code
from pynq.overlays.base import BaseOverlay
base = BaseOverlay("base.bit")
###Output
_____no_output_____
###Markdown
Setup the sensor1. Connect the ***pmod2grove*** to ***PMODB***. 2. Connect ***Grove ADC*** port ***J1*** (SCL, SDA, VCC, GND) to port ***G4*** of the Pynq Grove Adapter. 3. Connect the ***Grove TMP*** to port ***J2*** of the ***Grove ADC*** (GND, VCC, NC, SIG) Create an instance of the sensorThe sensor is connected to the ADC. You will create an instance of the temperature sensor. The Grove ADC is connected to the board through the Pynq Grove adapter. This can be connected to either of the Pmod ports. The Grove ADC is an I2C peripheral. I2C requires pull-up pins on the FPGA. In the base overlay, these pins are only available on ports G3 or G4 of the Pynq Grove adapter, so the ADC must be connected to one of these ports. The Pmod port (PMODA, or PMODB), and the pins on the adapter are specified when the instance is created.
###Code
import math
from pynq.lib.pmod import Grove_TMP
from pynq.lib.pmod import PMOD_GROVE_G4 # import constants
# Grove2pmod is connected to PMODB (2)
# Grove ADC is connected to G4 (pins [6,2])
tmp = Grove_TMP(base.PMODB, PMOD_GROVE_G4)
###Output
_____no_output_____
###Markdown
Read from the sensorInternally, the Grove ADC provides a raw sample which is the resistance of the sensor. In the IOP, this value is converted into a temperature value.
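For reference, the usual conversion for this NTC thermistor (a sketch based on the Seeed wiki for the v1.2 sensor, assuming its constants $B = 4275$ and $R_0 = 100\,\mathrm{k}\Omega$) is $$T = \frac{1}{\frac{\ln(R/R_0)}{B} + \frac{1}{298.15}} - 273.15 \ (^{\circ}\mathrm{C}),$$ where $R$ is the resistance derived from the ADC reading.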
###Code
temperature = tmp.read()
print("{0:.2f} degree Celsius".format(temperature))
###Output
_____no_output_____
###Markdown
You can run the cell above a number of times. Start logging once every 100ms for 10 secondsExecuting the next cell will start logging the temperature sensor values every 100ms, and will run for 10s. You can try touch/hold the temperature sensor to vary the measured temperature.You can vary the logging interval and the duration by changing the values in the cell below. The raw samples are stored in the internal memory, and converted into temperature values.
###Code
import time
ms_delay = 100
delay_s = 10
tmp.set_log_interval_ms(ms_delay)
tmp.start_log()
time.sleep(delay_s) # Change input during this time
tmp_log = tmp.get_log()
###Output
_____no_output_____
###Markdown
---- Display a graphUse matplotlib to display a graph of the temperature sensor data.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot(range(len(tmp_log)), tmp_log, 'ro')
plt.title('Grove Temperature Plot')
min_tmp_log = min(tmp_log)
max_tmp_log = max(tmp_log)
plt.axis([0, len(tmp_log), min_tmp_log, max_tmp_log])
plt.show()
###Output
_____no_output_____
###Markdown
Grove Temperature Sensor example----* [Introduction](Introduction)* [Setup the board](Setup-the-board)* [Setup the sensor](Setup-the-sensor)* [Read from the sensor](Read-from-the-sensor)* [Display a graph](Display-a-graph)---- IntroductionThe PYNQ-Z1 and PYNQ-Z2 boards have two Pmod ports and an Arduino interface. The PYNQ-Z2 also has a Raspberry Pi interface. A number of Pmod, Grove, and other peripherals are supported by PYNQ. Pmods can be plugged directly into the Pmod port. Grove Peripherals can be connected to the Pmod Port or Arduino header through adapter boards.The external pins of these interfaces are connected to PL pins. This means the logic to control an external peripheral must be implemented in the PL in an Overlay. Pmods, Grove and Arduino peripherals can be used with IOPs in the *base* Overlay for the PYNQ-Z1 and PYNQ-Z2. This notebook will show how to use the [Grove Temperature Sensor v1.2](http://wiki.seeedstudio.com/Grove-Temperature_Sensor_V1.2/) with the [Grove I2C ADC](http://wiki.seeedstudio.com/Grove-I2C_ADC/) on the PYNQ-Z1 or PYNQ-Z2 board. The Grove Temperature sensor produces an analog signal, and requires an Analog to Digital Converter (ADC). You will also see how to plot a graph using _matplotlib_, a Python package for 2D plots. A Grove Temperature sensor, a Grove ADC, and a Pynq Grove Adapter are required for this notebook example (a Pynq Arduino adapter could also be used instead of the Pynq Grove Adapter).The driver for the Temperature sensor running on the IOP supports reading a single value of temperature, or reading and logging of multiple values at regular intervals. ---- Setup the boardStart by loading the Base Overlay.
###Code
from pynq.overlays.base import BaseOverlay
base = BaseOverlay("base.bit")
###Output
_____no_output_____
###Markdown
Setup the sensor1. Connect the ***pmod2grove*** to ***PMODB***. 2. Connect ***Grove I2C ADC*** port ***J1*** (SCL, SDA, VCC, GND) to port ***G4*** of the Pynq Grove Adapter. 3. Connect the ***Grove TMP*** to port ***J2*** of the ***Grove ADC*** (GND, VCC, NC, SIG) Create an instance of the sensorThe sensor is connected to the ADC. You will create an instance of the temperature sensor. The Grove ADC is connected to the board through the Pynq Grove adapter. This can be connected to either of the Pmod ports. The Grove ADC is an I2C peripheral. I2C requires pull-up pins on the FPGA. In the base overlay, these pins are only available on ports G3 or G4 of the Pynq Grove adapter, so the ADC must be connected to one of these ports. The Pmod port (PMODA, or PMODB), and the pins on the adapter are specified when the instance is created.
###Code
import math
from pynq.lib.pmod import Grove_TMP
from pynq.lib.pmod import PMOD_GROVE_G4 # import constants
# Grove2pmod is connected to PMODB (2)
# Grove ADC is connected to G4 (pins [6,2])
tmp = Grove_TMP(base.PMODB, PMOD_GROVE_G4)
###Output
_____no_output_____
###Markdown
Read from the sensorInternally, the Grove ADC provides a raw sample which is the resistance of the sensor. In the IOP, this value is converted into a temperature value.
###Code
temperature = tmp.read()
print("{0:.2f} degree Celsius".format(temperature))
###Output
_____no_output_____
###Markdown
You can run the cell above a number of times. Start logging once every 100ms for 10 secondsExecuting the next cell will start logging the temperature sensor values every 100ms, and will run for 10s. You can try touch/hold the temperature sensor to vary the measured temperature.You can vary the logging interval and the duration by changing the values in the cell below. The raw samples are stored in the internal memory, and converted into temperature values.
###Code
import time
ms_delay = 100
delay_s = 10
tmp.set_log_interval_ms(ms_delay)
tmp.start_log()
time.sleep(delay_s) # Change input during this time
tmp_log = tmp.get_log()
print("Logged {} samples".format(len(tmp_log)))
###Output
_____no_output_____
###Markdown
---- Display a graphUse matplotlib to display a graph of the temperature sensor data.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot(range(len(tmp_log)), tmp_log, 'ro')
plt.title('Grove Temperature Plot')
plt.xlabel('Sample')
plt.ylabel('Temperature (Celsius)')
min_tmp_log = min(tmp_log)
max_tmp_log = max(tmp_log)
plt.axis([0, len(tmp_log), min_tmp_log, max_tmp_log])
plt.show()
###Output
_____no_output_____
###Markdown
Grove Temperature Sensor example----* [Introduction](Introduction)* [Setup the board](Setup-the-board)* [Setup the sensor](Setup-the-sensor)* [Read from the sensor](Read-from-the-sensor)* [Display a graph](Display-a-graph)---- IntroductionThe PYNQ-Z1 has several peripheral interfaces: an Arduino interface and two Pmod ports. A number of Pmod and Grove Peripherals are supported on the PYNQ-Z1.Pmods can be plugged directly into the Pmod port. Grove Peripherals can be connected to the Pmod Port through a Pynq Grove Adapter board.As the Pmod interfaces and Arduino interface are simply connected to FPGA pins, with all interface logic provided in an overlay, other peripherals can be connected with wires to the ports. (This assumes a custom overlay will be used.)This notebook will show how to use the [Grove Temperature Sensor v1.2](http://www.seeedstudio.com/wiki/Grove_-_Temperature_Sensor_V1.2) on the Pynq-Z1 board. You will also see how to plot a graph using _matplotlib_, a Python package for 2D plots. The Grove Temperature sensor produces an analog signal, and requires an ADC. A Grove Temperature sensor, a Grove ADC, and a Pynq Grove Adapter are required for this notebook example (a Pynq Shield could also be used instead of the Pynq Grove Adapter).The Grove ADC is an example of an I2C peripheral.The driver running on the IOP supports reading a single value of temperature, or reading and logging of multiple values at regular intervals. ---- Setup the boardStart by loading the Base Overlay.
###Code
from pynq.overlays.base import BaseOverlay
base = BaseOverlay("base.bit")
###Output
_____no_output_____
###Markdown
Setup the sensor1. Connect the ***pmod2grove*** to ***PMODB***. 2. Connect ***Grove ADC*** port ***J1*** (SCL, SDA, VCC, GND) to port ***G4*** of the Pynq Grove Adapter. 3. Connect the ***Grove TMP*** to port ***J2*** of the ***Grove ADC*** (GND, VCC, NC, SIG) Create an instance of the sensorThe sensor is connected to the ADC. You will create an instance of the temperature sensor. The Grove ADC is connected to the board through the Pynq Grove adapter. This can be connected to either of the Pmod ports. The Grove ADC is an IIC peripheral. IIC requires pull-up pins on the FPGA. In the base overlay, these pins are only available on ports G3 or G4 of the Pynq Grove adapter, so the ADC must be connected to one of these ports. The Pmod port (PMODA, or PMODB), and the pins on the adapter are specified when the instance is created.
###Code
import math
from pynq.lib.pmod import Grove_TMP
from pynq.lib.pmod import PMOD_GROVE_G4 # import constants
# Grove2pmod is connected to PMODB (2)
# Grove ADC is connected to G4 (pins [6,2])
tmp = Grove_TMP(base.PMODB, PMOD_GROVE_G4)
###Output
_____no_output_____
###Markdown
Read from the sensorInternally, the Grove ADC provides a raw sample which is the resistance of the sensor. In the IOP, this value is converted into a temperature value.
###Code
temperature = tmp.read()
print("{0:.2f} degree Celsius".format(temperature))
###Output
_____no_output_____
###Markdown
You can run the cell above a number of times. Start logging once every 100ms for 10 secondsExecuting the next cell will start logging the temperature sensor values every 100ms, and will run for 10s. You can try touch/hold the temperature sensor to vary the measured temperature.You can vary the logging interval and the duration by changing the values in the cell below. The raw samples are stored in the internal memory, and converted into temperature values.
###Code
import time
ms_delay = 100
delay_s = 10
tmp.set_log_interval_ms(ms_delay)
tmp.start_log()
time.sleep(delay_s) # Change input during this time
tmp_log = tmp.get_log()
###Output
_____no_output_____
###Markdown
---- Display a graphUse matplotlib to display a graph of the temperature sensor data.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot(range(len(tmp_log)), tmp_log, 'ro')
plt.title('Grove Temperature Plot')
min_tmp_log = min(tmp_log)
max_tmp_log = max(tmp_log)
plt.axis([0, len(tmp_log), min_tmp_log, max_tmp_log])
plt.show()
###Output
_____no_output_____ |
Movie_Recommender_System_Using_KMeans_And_Cosine_Similarity.ipynb | ###Markdown
Data Preparation
###Code
import nltk  # imports added; they were missing from this notebook
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from nltk.corpus import stopwords
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.manifold import TSNE
from sklearn.metrics import silhouette_score, cohen_kappa_score
from sklearn.metrics.pairwise import linear_kernel

def freq_words(x, terms = 30):
    all_words = ' '.join([text for text in x])
    all_words = all_words.split()
    fdist = nltk.FreqDist(all_words)
    words_df = pd.DataFrame({'word':list(fdist.keys()), 'count':list(fdist.values())})
    # selecting the top `terms` most frequent words
    d = words_df.nlargest(columns="count", n = terms)
    plt.figure(figsize=(12,15))
    ax = sns.barplot(data=d, x= "count", y = "word")
    ax.set(ylabel = 'Word')
    plt.show()
#cleaning book data from stop words and punctuation
def wordPreparation(txt, flg_stemm=False, flg_lemm=True):
tokenized_word = nltk.word_tokenize(txt)
tokenized_word = nltk.RegexpTokenizer('\w+').tokenize(txt)
stop_words=set(stopwords.words("english"))
## Removing stop words and punctuation
filtered_words=[x.lower() for x in tokenized_word if x.lower() not in stop_words and x.isalnum() ]
## Stemming (remove -ing, -ly, ...)
if flg_stemm == True:
ps = nltk.stem.porter.PorterStemmer()
lst_text = [ps.stem(word) for word in filtered_words]
## Lemmatisation (convert the word into root word)
if flg_lemm == True:
lem = nltk.stem.wordnet.WordNetLemmatizer()
lst_text = [lem.lemmatize(word) for word in filtered_words]
filtered_words = " ".join(filtered_words)
return filtered_words
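# Illustrative check (added; hypothetical input):
# wordPreparation("The movies were great!") -> "movie great"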
df = pd.read_csv('IMDB_Top250Engmovies2_OMDB_Detailed.csv')
df = df.sample(frac=1).reset_index(drop=True)
# to combine 4 lists (4 columns) of key words into 1 sentence under Bag_of_words column
df['details'] = df['Genre']+' '+df['Director']+' '+df['Actors']+' '+df['Plot']
final_data = pd.DataFrame({'label':range(0,250),'title':df['Title'],'details':df['details']})
final_data['details'] = final_data['details'].apply(wordPreparation)
X = final_data['details']
y = final_data['title']
final_data
def MapTrueClusterClass(y_true_, y_pred_):
clus_true_class = []
clus_true_lab = dict()
for pred in set(y_pred_):
cluster_members = y_true_[pred == y_pred_]
clus, clus_counts = np.unique(cluster_members, return_counts = True)
dom_class = clus[np.argmax(clus_counts)]
clus_true_lab[pred] = dom_class
clus_true_class.append(dom_class)
# End For
return clus_true_class, clus_true_lab
# End Func
temp_gen = [genre.split(',') for genre in df['Genre']]
genres = np.array([genre.split(',')[0] for genre in df['Genre']])
final_data['MGenre'] = genres
final_data['Genres'] = temp_gen
for val in ['Mystery', 'Film-Noir', 'Sci-Fi']:
genres[genres == val] = 'Horror'
# End For
classes, counts = np.unique(genres, return_counts=True)
classes_inds = { i: classes[i] for i in range(len(classes)) }
y_clusters = np.array([[k for k, v in classes_inds.items() if v == val][0] for val in genres])
plt.figure(figsize=(7, 7))
plt.barh(classes, counts)
plt.xlabel('Counts')
plt.ylabel('Classes')
plt.show()
tfidf_vec = TfidfVectorizer()
X_tfidf = tfidf_vec.fit_transform(X)
wcss = []
sillou = []
ks = []
for i in range(1, 10):
k = i+1
model_ = KMeans(n_clusters=k, init='k-means++', max_iter=100, n_init=1)
y_pred = model_.fit(X_tfidf).predict(X_tfidf)
cluster_labels, _ = MapTrueClusterClass(y_clusters, y_pred)
true_cluster_labels = [cluster_labels[pred] for pred in y_pred]
sillou.append(silhouette_score(np.array(true_cluster_labels).reshape(-1, 1), np.array(y_clusters).reshape(-1, 1)))
ks.append(k)
wcss.append(model_.inertia_)
# End of For
plt.plot(ks, wcss)
plt.title('Choose the best K based on Elbow')
plt.xlabel('K')
plt.ylabel('WCSS')
plt.show()
true_k = 6
model = KMeans(n_clusters=true_k, init='k-means++', max_iter=100, n_init=1)
y_pred = model.fit(X_tfidf).predict(X_tfidf)
cluster_labels, clusters_dom_class = MapTrueClusterClass(y_clusters, y_pred)
true_cluster_labels = [cluster_labels[pred] for pred in y_pred]
print('wcss: ', model.inertia_)
print('silhouette: ', silhouette_score(np.array(true_cluster_labels).reshape(-1, 1), np.array(y_clusters).reshape(-1, 1)))
print('kappa: ', cohen_kappa_score(np.array(true_cluster_labels).reshape(-1, 1), np.array(y_clusters).reshape(-1, 1)))
# Top 10 frequent word in each cluster
def plt_top(clus_texts, title = '', num = 10):
clus_texts = ' '.join(clus_texts)
plt.title(title)
nltk.FreqDist(nltk.word_tokenize(clus_texts)).plot(num)
plt.show()
# End of Func
def AnalyseText(txt_):
print(f'text: {txt_}')
nltk.FreqDist(nltk.word_tokenize(txt_)).plot()
plt.show()
# End of Func
for clus_num in range(true_k):
plt_top(X[y_pred == clus_num], title = f'Cluster {clus_num} With Class {classes_inds[clusters_dom_class[clus_num]]} Top {10} Frequent Words', num=10)
# End of For
tsne = TSNE(n_components=2,random_state=100)
data2D = tsne.fit_transform(X_tfidf)
fig, ax = plt.subplots()
for cls in classes:
ax.scatter(data2D[cls == genres, 0], data2D[cls == genres, 1],s=30 ,label=cls)
# End Of For
ax.legend()
ax.grid(True)
fig, ax = plt.subplots()
for pred in range(0, true_k):
ax.scatter(data2D[pred == y_pred, 0], data2D[pred == y_pred, 1],s=30 ,label=f'Cluster_{pred}')
# End Of For
ax.legend()
ax.grid(True)
fig, ax = plt.subplots()
for lbl in set(true_cluster_labels):
inds = [i for i in range(len(cluster_labels)) if cluster_labels[i] == lbl]
ax.scatter(data2D[lbl == true_cluster_labels, 0], data2D[lbl == true_cluster_labels, 1],s=30 ,label=f'{classes_inds[lbl]}: Clusters_{inds}')
# End Of For
ax.legend()
ax.grid(True)
# preprocess the free-text query the same way as the training details
query = wordPreparation('I want a shooting and action film events')
trans = tfidf_vec.transform([query])
pred_ = model.predict(trans)
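# Note (added): TfidfVectorizer L2-normalises rows by default (norm='l2'),
# so the linear kernel below is exactly cosine similarity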
cosine_similarities = linear_kernel(trans[0:1], X_tfidf[pred_ == y_pred]).flatten()
final_df = pd.DataFrame()
final_df['film'] = y[pred_ == y_pred]
final_df['score'] = cosine_similarities
final_df = final_df.sort_values(by = ['score'], ascending=False)
print(f"The predicted cluster is {pred_[0]} which is type {classes_inds[cluster_labels[pred_[0]]]}")
final_df.head(10).reset_index().drop('index', axis=1)
recommended_films = final_df['film'].head(5).values
recommended_films
final_df['film'].head(5).index
###Output
_____no_output_____
###Markdown
Error Analysis
###Code
recommended_discriptions = final_df['film'].head(5).index
final_data.iloc[recommended_discriptions, :].drop('label', axis=1)
final_data.iloc[recommended_discriptions, :]['details'].apply(AnalyseText)
###Output
text: drama film noir billy wilder william holden gloria swanson erich von stroheim nancy olson screenwriter hired rework faded silent film star script find developing dangerous relationship
|
FinRL_portfolio_allocation_NeurIPS_2020.ipynb | ###Markdown
Deep Reinforcement Learning for Stock Trading from Scratch: Portfolio AllocationTutorials to use OpenAI DRL to perform portfolio allocation in one Jupyter Notebook | Presented at NeurIPS 2020: Deep RL Workshop* This blog is based on our paper: FinRL: A Deep Reinforcement Learning Library for Automated Stock Trading in Quantitative Finance, presented at NeurIPS 2020: Deep RL Workshop.* Check out medium blog for detailed explanations: * Please report any issues to our Github: https://github.com/AI4Finance-LLC/FinRL-Library/issues* **Pytorch Version** Content * [1. Problem Definition](0)* [2. Getting Started - Load Python packages](1) * [2.1. Install Packages](1.1) * [2.2. Check Additional Packages](1.2) * [2.3. Import Packages](1.3) * [2.4. Create Folders](1.4)* [3. Download Data](2)* [4. Preprocess Data](3) * [4.1. Technical Indicators](3.1) * [4.2. Perform Feature Engineering](3.2)* [5.Build Environment](4) * [5.1. Training & Trade Data Split](4.1) * [5.2. User-defined Environment](4.2) * [5.3. Initialize Environment](4.3) * [6.Implement DRL Algorithms](5) * [7.Backtesting Performance](6) * [7.1. BackTestStats](6.1) * [7.2. BackTestPlot](6.2) * [7.3. Baseline Stats](6.3) * [7.4. Compare to Stock Market Index](6.4) Part 1. Problem Definition This problem is to design an automated trading solution for portfolio allocation. We model the stock trading process as a Markov Decision Process (MDP). We then formulate our trading goal as a maximization problem.The algorithm is trained using Deep Reinforcement Learning (DRL) algorithms and the components of the reinforcement learning environment are:* Action: The action space describes the allowed actions through which the agent interacts with the environment. Normally, a ∈ A includes three actions: a ∈ {−1, 0, 1}, where −1, 0, 1 represent selling, holding, and buying one stock. Also, an action can be carried upon multiple shares. We use an action space {−k, ..., −1, 0, 1, ..., k}, where k denotes the number of shares. For example, "Buy 10 shares of AAPL" or "Sell 10 shares of AAPL" are 10 or −10, respectively.* Reward function: r(s, a, s′) is the incentive mechanism for an agent to learn a better action. The reward is the change of the portfolio value when action a is taken at state s and arriving at new state s′, i.e., r(s, a, s′) = v′ − v, where v′ and v represent the portfolio values at state s′ and s, respectively.* State: The state space describes the observations that the agent receives from the environment. Just as a human trader needs to analyze various information before executing a trade, so our trading agent observes many different features to better learn in an interactive environment.* Environment: Dow 30 constituentsThe data of the Dow 30 constituent stocks that we will be using for this case study is obtained from the Yahoo Finance API. The data contains Open-High-Low-Close price and volume. Part 2. Getting Started- Load Python Packages 2.1. Install all the packages through FinRL library
###Code
## install finrl library
!pip install git+https://github.com/AI4Finance-LLC/FinRL-Library.git
###Output
Collecting git+https://github.com/AI4Finance-LLC/FinRL-Library.git
Cloning https://github.com/AI4Finance-LLC/FinRL-Library.git to /tmp/pip-req-build-bd7z679f
Running command git clone -q https://github.com/AI4Finance-LLC/FinRL-Library.git /tmp/pip-req-build-bd7z679f
Collecting pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2
Cloning https://github.com/quantopian/pyfolio.git to /tmp/pip-install-szxrp4sk/pyfolio_d7f586c8021647e0bb2f92448edfcce8
Running command git clone -q https://github.com/quantopian/pyfolio.git /tmp/pip-install-szxrp4sk/pyfolio_d7f586c8021647e0bb2f92448edfcce8
Requirement already satisfied: numpy>=1.17.3 in /usr/local/lib/python3.7/dist-packages (from finrl==0.3.0) (1.19.5)
Requirement already satisfied: pandas>=1.1.5 in /usr/local/lib/python3.7/dist-packages (from finrl==0.3.0) (1.1.5)
Requirement already satisfied: stockstats in /usr/local/lib/python3.7/dist-packages (from finrl==0.3.0) (0.3.2)
Requirement already satisfied: yfinance in /usr/local/lib/python3.7/dist-packages (from finrl==0.3.0) (0.1.63)
Requirement already satisfied: matplotlib in /usr/local/lib/python3.7/dist-packages (from finrl==0.3.0) (3.2.2)
Requirement already satisfied: scikit-learn>=0.21.0 in /usr/local/lib/python3.7/dist-packages (from finrl==0.3.0) (0.22.2.post1)
Requirement already satisfied: gym>=0.17 in /usr/local/lib/python3.7/dist-packages (from finrl==0.3.0) (0.17.3)
Requirement already satisfied: stable-baselines3[extra] in /usr/local/lib/python3.7/dist-packages (from finrl==0.3.0) (1.1.0)
Requirement already satisfied: pytest in /usr/local/lib/python3.7/dist-packages (from finrl==0.3.0) (3.6.4)
Requirement already satisfied: setuptools>=41.4.0 in /usr/local/lib/python3.7/dist-packages (from finrl==0.3.0) (57.2.0)
Requirement already satisfied: wheel>=0.33.6 in /usr/local/lib/python3.7/dist-packages (from finrl==0.3.0) (0.36.2)
Requirement already satisfied: ipython>=3.2.3 in /usr/local/lib/python3.7/dist-packages (from pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.0) (5.5.0)
Requirement already satisfied: pytz>=2014.10 in /usr/local/lib/python3.7/dist-packages (from pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.0) (2018.9)
Requirement already satisfied: scipy>=0.14.0 in /usr/local/lib/python3.7/dist-packages (from pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.0) (1.4.1)
Requirement already satisfied: seaborn>=0.7.1 in /usr/local/lib/python3.7/dist-packages (from pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.0) (0.11.1)
Requirement already satisfied: empyrical>=0.5.0 in /usr/local/lib/python3.7/dist-packages (from pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.0) (0.5.5)
Requirement already satisfied: pandas-datareader>=0.2 in /usr/local/lib/python3.7/dist-packages (from empyrical>=0.5.0->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.0) (0.9.0)
Requirement already satisfied: cloudpickle<1.7.0,>=1.2.0 in /usr/local/lib/python3.7/dist-packages (from gym>=0.17->finrl==0.3.0) (1.3.0)
Requirement already satisfied: pyglet<=1.5.0,>=1.4.0 in /usr/local/lib/python3.7/dist-packages (from gym>=0.17->finrl==0.3.0) (1.5.0)
Requirement already satisfied: simplegeneric>0.8 in /usr/local/lib/python3.7/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.0) (0.8.1)
Requirement already satisfied: pexpect in /usr/local/lib/python3.7/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.0) (4.8.0)
Requirement already satisfied: pygments in /usr/local/lib/python3.7/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.0) (2.6.1)
Requirement already satisfied: pickleshare in /usr/local/lib/python3.7/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.0) (0.7.5)
Requirement already satisfied: decorator in /usr/local/lib/python3.7/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.0) (4.4.2)
Requirement already satisfied: traitlets>=4.2 in /usr/local/lib/python3.7/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.0) (5.0.5)
Requirement already satisfied: prompt-toolkit<2.0.0,>=1.0.4 in /usr/local/lib/python3.7/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.0) (1.0.18)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->finrl==0.3.0) (1.3.1)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.7/dist-packages (from matplotlib->finrl==0.3.0) (0.10.0)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->finrl==0.3.0) (2.4.7)
Requirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->finrl==0.3.0) (2.8.1)
Requirement already satisfied: six in /usr/local/lib/python3.7/dist-packages (from cycler>=0.10->matplotlib->finrl==0.3.0) (1.15.0)
Requirement already satisfied: lxml in /usr/local/lib/python3.7/dist-packages (from pandas-datareader>=0.2->empyrical>=0.5.0->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.0) (4.6.3)
Requirement already satisfied: requests>=2.19.0 in /usr/local/lib/python3.7/dist-packages (from pandas-datareader>=0.2->empyrical>=0.5.0->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.0) (2.23.0)
Requirement already satisfied: wcwidth in /usr/local/lib/python3.7/dist-packages (from prompt-toolkit<2.0.0,>=1.0.4->ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.0) (0.2.5)
Requirement already satisfied: future in /usr/local/lib/python3.7/dist-packages (from pyglet<=1.5.0,>=1.4.0->gym>=0.17->finrl==0.3.0) (0.16.0)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests>=2.19.0->pandas-datareader>=0.2->empyrical>=0.5.0->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.0) (3.0.4)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests>=2.19.0->pandas-datareader>=0.2->empyrical>=0.5.0->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.0) (2.10)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests>=2.19.0->pandas-datareader>=0.2->empyrical>=0.5.0->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.0) (1.24.3)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests>=2.19.0->pandas-datareader>=0.2->empyrical>=0.5.0->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.0) (2021.5.30)
Requirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.7/dist-packages (from scikit-learn>=0.21.0->finrl==0.3.0) (1.0.1)
Requirement already satisfied: ipython-genutils in /usr/local/lib/python3.7/dist-packages (from traitlets>=4.2->ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.0) (0.2.0)
Requirement already satisfied: ptyprocess>=0.5 in /usr/local/lib/python3.7/dist-packages (from pexpect->ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.0) (0.7.0)
Requirement already satisfied: pluggy<0.8,>=0.5 in /usr/local/lib/python3.7/dist-packages (from pytest->finrl==0.3.0) (0.7.1)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.7/dist-packages (from pytest->finrl==0.3.0) (21.2.0)
Requirement already satisfied: more-itertools>=4.0.0 in /usr/local/lib/python3.7/dist-packages (from pytest->finrl==0.3.0) (8.8.0)
Requirement already satisfied: py>=1.5.0 in /usr/local/lib/python3.7/dist-packages (from pytest->finrl==0.3.0) (1.10.0)
Requirement already satisfied: atomicwrites>=1.0 in /usr/local/lib/python3.7/dist-packages (from pytest->finrl==0.3.0) (1.4.0)
Requirement already satisfied: torch>=1.4.0 in /usr/local/lib/python3.7/dist-packages (from stable-baselines3[extra]->finrl==0.3.0) (1.9.0+cu102)
Requirement already satisfied: atari-py~=0.2.0 in /usr/local/lib/python3.7/dist-packages (from stable-baselines3[extra]->finrl==0.3.0) (0.2.9)
Requirement already satisfied: tensorboard>=2.2.0 in /usr/local/lib/python3.7/dist-packages (from stable-baselines3[extra]->finrl==0.3.0) (2.5.0)
Requirement already satisfied: opencv-python in /usr/local/lib/python3.7/dist-packages (from stable-baselines3[extra]->finrl==0.3.0) (4.1.2.30)
Requirement already satisfied: pillow in /usr/local/lib/python3.7/dist-packages (from stable-baselines3[extra]->finrl==0.3.0) (7.1.2)
Requirement already satisfied: psutil in /usr/local/lib/python3.7/dist-packages (from stable-baselines3[extra]->finrl==0.3.0) (5.4.8)
Requirement already satisfied: absl-py>=0.4 in /usr/local/lib/python3.7/dist-packages (from tensorboard>=2.2.0->stable-baselines3[extra]->finrl==0.3.0) (0.12.0)
Requirement already satisfied: google-auth<2,>=1.6.3 in /usr/local/lib/python3.7/dist-packages (from tensorboard>=2.2.0->stable-baselines3[extra]->finrl==0.3.0) (1.32.1)
Requirement already satisfied: grpcio>=1.24.3 in /usr/local/lib/python3.7/dist-packages (from tensorboard>=2.2.0->stable-baselines3[extra]->finrl==0.3.0) (1.34.1)
Requirement already satisfied: protobuf>=3.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard>=2.2.0->stable-baselines3[extra]->finrl==0.3.0) (3.17.3)
Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in /usr/local/lib/python3.7/dist-packages (from tensorboard>=2.2.0->stable-baselines3[extra]->finrl==0.3.0) (0.4.4)
Requirement already satisfied: tensorboard-data-server<0.7.0,>=0.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard>=2.2.0->stable-baselines3[extra]->finrl==0.3.0) (0.6.1)
Requirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.7/dist-packages (from tensorboard>=2.2.0->stable-baselines3[extra]->finrl==0.3.0) (1.0.1)
Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.7/dist-packages (from tensorboard>=2.2.0->stable-baselines3[extra]->finrl==0.3.0) (3.3.4)
Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard>=2.2.0->stable-baselines3[extra]->finrl==0.3.0) (1.8.0)
Requirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.7/dist-packages (from google-auth<2,>=1.6.3->tensorboard>=2.2.0->stable-baselines3[extra]->finrl==0.3.0) (0.2.8)
Requirement already satisfied: cachetools<5.0,>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from google-auth<2,>=1.6.3->tensorboard>=2.2.0->stable-baselines3[extra]->finrl==0.3.0) (4.2.2)
Requirement already satisfied: rsa<5,>=3.1.4 in /usr/local/lib/python3.7/dist-packages (from google-auth<2,>=1.6.3->tensorboard>=2.2.0->stable-baselines3[extra]->finrl==0.3.0) (4.7.2)
Requirement already satisfied: requests-oauthlib>=0.7.0 in /usr/local/lib/python3.7/dist-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard>=2.2.0->stable-baselines3[extra]->finrl==0.3.0) (1.3.0)
Requirement already satisfied: importlib-metadata in /usr/local/lib/python3.7/dist-packages (from markdown>=2.6.8->tensorboard>=2.2.0->stable-baselines3[extra]->finrl==0.3.0) (4.6.1)
Requirement already satisfied: pyasn1<0.5.0,>=0.4.6 in /usr/local/lib/python3.7/dist-packages (from pyasn1-modules>=0.2.1->google-auth<2,>=1.6.3->tensorboard>=2.2.0->stable-baselines3[extra]->finrl==0.3.0) (0.4.8)
Requirement already satisfied: oauthlib>=3.0.0 in /usr/local/lib/python3.7/dist-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard>=2.2.0->stable-baselines3[extra]->finrl==0.3.0) (3.1.1)
Requirement already satisfied: typing-extensions in /usr/local/lib/python3.7/dist-packages (from torch>=1.4.0->stable-baselines3[extra]->finrl==0.3.0) (3.7.4.3)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.7/dist-packages (from importlib-metadata->markdown>=2.6.8->tensorboard>=2.2.0->stable-baselines3[extra]->finrl==0.3.0) (3.5.0)
Requirement already satisfied: int-date>=0.1.7 in /usr/local/lib/python3.7/dist-packages (from stockstats->finrl==0.3.0) (0.1.8)
Requirement already satisfied: multitasking>=0.0.7 in /usr/local/lib/python3.7/dist-packages (from yfinance->finrl==0.3.0) (0.0.9)
###Markdown
2.2. Check if the additional packages needed are present, if not install them. * Yahoo Finance API* pandas* numpy* matplotlib* stockstats* OpenAI gym* stable-baselines* tensorflow* pyfolio 2.3. Import Packages
###Code
import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
matplotlib.use('Agg')
%matplotlib inline
import datetime
from finrl.apps import config
from finrl.neo_finrl.preprocessor.yahoodownloader import YahooDownloader
from finrl.neo_finrl.preprocessor.preprocessors import FeatureEngineer, data_split
from finrl.neo_finrl.env_portfolio_allocation.env_portfolio import StockPortfolioEnv
from finrl.drl_agents.stablebaselines3.models import DRLAgent
from finrl.plot import backtest_stats, backtest_plot, get_daily_return, get_baseline,convert_daily_return_to_pyfolio_ts
import sys
sys.path.append("../FinRL-Library")
###Output
/usr/local/lib/python3.7/dist-packages/pyfolio/pos.py:27: UserWarning: Module "zipline.assets" not found; multipliers will not be applied to position notionals.
'Module "zipline.assets" not found; multipliers will not be applied'
###Markdown
2.4. Create Folders
###Code
import os
# Create the working directories defined in config if they do not already exist.
for directory in [
    config.DATA_SAVE_DIR,
    config.TRAINED_MODEL_DIR,
    config.TENSORBOARD_LOG_DIR,
    config.RESULTS_DIR,
]:
    os.makedirs("./" + directory, exist_ok=True)
###Output
_____no_output_____
###Markdown
Part 3. Download Data
Yahoo Finance is a website that provides stock data, financial news, financial reports, etc. All the data provided by Yahoo Finance is free.
* FinRL uses a class **YahooDownloader** to fetch data from the Yahoo Finance API
* Call Limit: Using the Public API (without authentication), you are limited to 2,000 requests per hour per IP (or up to a total of 48,000 requests a day).
###Code
print(config.DOW_30_TICKER)
df = YahooDownloader(start_date = '2008-01-01',
end_date = '2021-07-01',
ticker_list = config.DOW_30_TICKER).fetch_data()
df.head()
df.shape
###Output
_____no_output_____
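###Markdown
Before preprocessing, it can help to sanity-check what came back from Yahoo Finance. The snippet below is a minimal sketch (not part of the original tutorial); it only relies on the `date`, `tic`, and `close` columns that the downloaded dataframe is assumed to contain.
###Code
# Quick sanity checks on the downloaded data (assumed columns: date, tic, close).
print("date range:", df.date.min(), "to", df.date.max())
print("number of tickers:", df.tic.nunique())
print("rows with missing close prices:", df.close.isna().sum())
###Output
_____no_output_____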
###Markdown
Part 4: Preprocess Data
Data preprocessing is a crucial step for training a high quality machine learning model. We need to check for missing data and do feature engineering in order to convert the data into a model-ready state.
* Add technical indicators. In practical trading, various information needs to be taken into account, for example the historical stock prices, current holding shares, technical indicators, etc. Here we demonstrate two trend-following technical indicators: MACD and RSI.
* Add turbulence index. Risk-aversion reflects whether an investor prefers to preserve capital. It also influences one's trading strategy when facing different levels of market volatility. To control risk in a worst-case scenario, such as the financial crisis of 2007–2008, FinRL employs the financial turbulence index, which measures extreme asset price fluctuation.
###Code
fe = FeatureEngineer(
use_technical_indicator=True,
use_turbulence=False,
user_defined_feature = False)
df = fe.preprocess_data(df)
df.shape
df.head()
###Output
_____no_output_____
###Markdown
Add covariance matrix as states
###Code
# add covariance matrix as states
df=df.sort_values(['date','tic'],ignore_index=True)
df.index = df.date.factorize()[0]
cov_list = []
return_list = []
# look back is one year
lookback=252
for i in range(lookback,len(df.index.unique())):
data_lookback = df.loc[i-lookback:i,:]
price_lookback=data_lookback.pivot_table(index = 'date',columns = 'tic', values = 'close')
return_lookback = price_lookback.pct_change().dropna()
return_list.append(return_lookback)
covs = return_lookback.cov().values
cov_list.append(covs)
df_cov = pd.DataFrame({'date':df.date.unique()[lookback:],'cov_list':cov_list,'return_list':return_list})
df = df.merge(df_cov, on='date')
df = df.sort_values(['date','tic']).reset_index(drop=True)
df.shape
df.head()
###Output
_____no_output_____
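###Markdown
Each row now carries a `cov_list` entry: a covariance matrix estimated from the trailing 252 trading days of daily returns. A minimal sketch to confirm the shape (assuming the Dow 30 ticker list, so 30 stocks):
###Code
# The covariance matrix stored on any row should be square, with one
# dimension per ticker (30 for the Dow 30 universe).
first_cov = df['cov_list'].values[0]
print("covariance matrix shape:", first_cov.shape)  # expected: (30, 30)
###Output
_____no_output_____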
###Markdown
Part 5. Design Environment
Considering the stochastic and interactive nature of automated stock trading tasks, a financial task is modeled as a **Markov Decision Process (MDP)** problem. The training process involves observing stock price changes, taking an action, and calculating the reward, so that the agent adjusts its strategy accordingly. By interacting with the environment, the trading agent derives a trading strategy that maximizes rewards as time proceeds.

Our trading environments, based on the OpenAI Gym framework, simulate live stock markets with real market data according to the principle of time-driven simulation.

The action space describes the allowed actions through which the agent interacts with the environment. Normally, action a takes one of three values: {-1, 0, 1}, where -1, 0, 1 represent selling, holding, and buying one share. An action can also be carried out on multiple shares. We use an action space {-k, …, -1, 0, 1, …, k}, where k denotes the number of shares to buy and -k denotes the number of shares to sell. For example, "Buy 10 shares of AAPL" and "Sell 10 shares of AAPL" are represented as 10 and -10, respectively. The continuous action space needs to be normalized to [-1, 1], since the policy is defined on a Gaussian distribution, which needs to be normalized and symmetric.

Training data split: 2009-01-01 to 2020-07-01 (matching the `data_split` call below).
###Code
train = data_split(df, '2009-01-01','2020-07-01')
#trade = data_split(df, '2020-01-01', config.END_DATE)
train.head()
###Output
_____no_output_____
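###Markdown
A quick check (a sketch, not in the original notebook) that the split covers the intended training window:
###Code
# Verify the training window produced by data_split.
print("train window:", train.date.min(), "to", train.date.max())
###Output
_____no_output_____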
###Markdown
Environment for Portfolio Allocation
###Code
import numpy as np
import pandas as pd
from gym.utils import seeding
import gym
from gym import spaces
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
from stable_baselines3.common.vec_env import DummyVecEnv
class StockPortfolioEnv(gym.Env):
"""A single stock trading environment for OpenAI gym
Attributes
----------
df: DataFrame
input data
stock_dim : int
number of unique stocks
hmax : int
maximum number of shares to trade
initial_amount : int
start money
transaction_cost_pct: float
transaction cost percentage per trade
reward_scaling: float
scaling factor for reward, good for training
state_space: int
the dimension of input features
action_space: int
equals stock dimension
tech_indicator_list: list
a list of technical indicator names
turbulence_threshold: int
a threshold to control risk aversion
day: int
an increment number to control date
Methods
-------
_sell_stock()
perform sell action based on the sign of the action
_buy_stock()
perform buy action based on the sign of the action
step()
at each step the agent will return actions, then
we will calculate the reward, and return the next observation.
reset()
reset the environment
render()
use render to return other functions
save_asset_memory()
return account value at each time step
save_action_memory()
return actions/positions at each time step
"""
metadata = {'render.modes': ['human']}
def __init__(self,
df,
stock_dim,
hmax,
initial_amount,
transaction_cost_pct,
reward_scaling,
state_space,
action_space,
tech_indicator_list,
turbulence_threshold=None,
lookback=252,
day = 0):
#super(StockEnv, self).__init__()
#money = 10 , scope = 1
self.day = day
self.lookback=lookback
self.df = df
self.stock_dim = stock_dim
self.hmax = hmax
self.initial_amount = initial_amount
self.transaction_cost_pct =transaction_cost_pct
self.reward_scaling = reward_scaling
self.state_space = state_space
self.action_space = action_space
self.tech_indicator_list = tech_indicator_list
# action_space normalization and shape is self.stock_dim
self.action_space = spaces.Box(low = 0, high = 1,shape = (self.action_space,))
# Shape = (34, 30)
# covariance matrix + technical indicators
self.observation_space = spaces.Box(low=-np.inf, high=np.inf, shape = (self.state_space+len(self.tech_indicator_list),self.state_space))
# load data from a pandas dataframe
self.data = self.df.loc[self.day,:]
self.covs = self.data['cov_list'].values[0]
self.state = np.append(np.array(self.covs), [self.data[tech].values.tolist() for tech in self.tech_indicator_list ], axis=0)
self.terminal = False
self.turbulence_threshold = turbulence_threshold
# initialize state: initial portfolio return + individual stock return + individual weights
self.portfolio_value = self.initial_amount
# memorize portfolio value each step
self.asset_memory = [self.initial_amount]
# memorize portfolio return each step
self.portfolio_return_memory = [0]
self.actions_memory=[[1/self.stock_dim]*self.stock_dim]
self.date_memory=[self.data.date.unique()[0]]
def step(self, actions):
# print(self.day)
self.terminal = self.day >= len(self.df.index.unique())-1
# print(actions)
if self.terminal:
df = pd.DataFrame(self.portfolio_return_memory)
df.columns = ['daily_return']
plt.plot(df.daily_return.cumsum(),'r')
plt.savefig('results/cumulative_reward.png')
plt.close()
plt.plot(self.portfolio_return_memory,'r')
plt.savefig('results/rewards.png')
plt.close()
print("=================================")
print("begin_total_asset:{}".format(self.asset_memory[0]))
print("end_total_asset:{}".format(self.portfolio_value))
df_daily_return = pd.DataFrame(self.portfolio_return_memory)
df_daily_return.columns = ['daily_return']
if df_daily_return['daily_return'].std() !=0:
sharpe = (252**0.5)*df_daily_return['daily_return'].mean()/ \
df_daily_return['daily_return'].std()
print("Sharpe: ",sharpe)
print("=================================")
return self.state, self.reward, self.terminal,{}
else:
#print("Model actions: ",actions)
# actions are the portfolio weight
# normalize to sum of 1
#if (np.array(actions) - np.array(actions).min()).sum() != 0:
# norm_actions = (np.array(actions) - np.array(actions).min()) / (np.array(actions) - np.array(actions).min()).sum()
#else:
# norm_actions = actions
weights = self.softmax_normalization(actions)
#print("Normalized actions: ", weights)
self.actions_memory.append(weights)
last_day_memory = self.data
#load next state
self.day += 1
self.data = self.df.loc[self.day,:]
self.covs = self.data['cov_list'].values[0]
self.state = np.append(np.array(self.covs), [self.data[tech].values.tolist() for tech in self.tech_indicator_list ], axis=0)
#print(self.state)
# calculate portfolio return
# individual stocks' return * weight
portfolio_return = sum(((self.data.close.values / last_day_memory.close.values)-1)*weights)
# update portfolio value
new_portfolio_value = self.portfolio_value*(1+portfolio_return)
self.portfolio_value = new_portfolio_value
# save into memory
self.portfolio_return_memory.append(portfolio_return)
self.date_memory.append(self.data.date.unique()[0])
self.asset_memory.append(new_portfolio_value)
# the reward is the new portfolio value, i.e., the end-of-step portfolio value
self.reward = new_portfolio_value
#print("Step reward: ", self.reward)
#self.reward = self.reward*self.reward_scaling
return self.state, self.reward, self.terminal, {}
def reset(self):
self.asset_memory = [self.initial_amount]
self.day = 0
self.data = self.df.loc[self.day,:]
# load states
self.covs = self.data['cov_list'].values[0]
self.state = np.append(np.array(self.covs), [self.data[tech].values.tolist() for tech in self.tech_indicator_list ], axis=0)
self.portfolio_value = self.initial_amount
#self.cost = 0
#self.trades = 0
self.terminal = False
self.portfolio_return_memory = [0]
self.actions_memory=[[1/self.stock_dim]*self.stock_dim]
self.date_memory=[self.data.date.unique()[0]]
return self.state
def render(self, mode='human'):
return self.state
def softmax_normalization(self, actions):
numerator = np.exp(actions)
denominator = np.sum(np.exp(actions))
softmax_output = numerator/denominator
return softmax_output
def save_asset_memory(self):
date_list = self.date_memory
portfolio_return = self.portfolio_return_memory
#print(len(date_list))
#print(len(asset_list))
df_account_value = pd.DataFrame({'date':date_list,'daily_return':portfolio_return})
return df_account_value
def save_action_memory(self):
# date and close price length must match actions length
date_list = self.date_memory
df_date = pd.DataFrame(date_list)
df_date.columns = ['date']
action_list = self.actions_memory
df_actions = pd.DataFrame(action_list)
df_actions.columns = self.data.tic.values
df_actions.index = df_date.date
#df_actions = pd.DataFrame({'date':date_list,'actions':action_list})
return df_actions
def _seed(self, seed=None):
self.np_random, seed = seeding.np_random(seed)
return [seed]
def get_sb_env(self):
e = DummyVecEnv([lambda: self])
obs = e.reset()
return e, obs
stock_dimension = len(train.tic.unique())
state_space = stock_dimension
print(f"Stock Dimension: {stock_dimension}, State Space: {state_space}")
env_kwargs = {
"hmax": 100,
"initial_amount": 1000000,
"transaction_cost_pct": 0.001,
"state_space": state_space,
"stock_dim": stock_dimension,
"tech_indicator_list": config.TECHNICAL_INDICATORS_LIST,
"action_space": stock_dimension,
"reward_scaling": 1e-4
}
e_train_gym = StockPortfolioEnv(df = train, **env_kwargs)
env_train, _ = e_train_gym.get_sb_env()
print(type(env_train))
###Output
<class 'stable_baselines3.common.vec_env.dummy_vec_env.DummyVecEnv'>
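###Markdown
To see how raw agent actions become portfolio weights, here is a minimal sketch (not part of the original tutorial) that steps the vectorized environment once with a random action. The environment applies the softmax $w_i = e^{a_i} / \sum_j e^{a_j}$, so the resulting weights are non-negative and sum to 1.
###Code
import numpy as np
# Step the wrapped environment once with a random action vector.
obs = env_train.reset()
random_action = np.random.uniform(0, 1, size=(1, stock_dimension))
obs, rewards, dones, infos = env_train.step(random_action)
# Reproduce the softmax normalization performed inside the environment.
weights = np.exp(random_action) / np.sum(np.exp(random_action))
print("portfolio weights sum:", weights.sum())  # ~1.0 by construction
###Output
_____no_output_____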
###Markdown
Part 6: Implement DRL Algorithms
* The implementation of the DRL algorithms is based on **OpenAI Baselines** and **Stable Baselines**. Stable Baselines is a fork of OpenAI Baselines, with a major structural refactoring and code cleanups.
* The FinRL library includes fine-tuned standard DRL algorithms, such as DQN, DDPG, Multi-Agent DDPG, PPO, SAC, A2C and TD3. We also allow users to design their own DRL algorithms by adapting these implementations.
###Code
# initialize
agent = DRLAgent(env = env_train)
###Output
_____no_output_____
###Markdown
Model 1: **A2C**
###Code
agent = DRLAgent(env = env_train)
A2C_PARAMS = {"n_steps": 5, "ent_coef": 0.005, "learning_rate": 0.0002}
model_a2c = agent.get_model(model_name="a2c",model_kwargs = A2C_PARAMS)
trained_a2c = agent.train_model(model=model_a2c,
tb_log_name='a2c',
total_timesteps=50000)
###Output
Logging to tensorboard_log/a2c/a2c_1
------------------------------------
| time/ | |
| fps | 150 |
| iterations | 100 |
| time_elapsed | 3 |
| total_timesteps | 500 |
| train/ | |
| entropy_loss | -41.1 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 99 |
| policy_loss | 1.93e+08 |
| std | 0.998 |
| value_loss | 2.72e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 204 |
| iterations | 200 |
| time_elapsed | 4 |
| total_timesteps | 1000 |
| train/ | |
| entropy_loss | -41.1 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 199 |
| policy_loss | 2.29e+08 |
| std | 0.998 |
| value_loss | 4.52e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 230 |
| iterations | 300 |
| time_elapsed | 6 |
| total_timesteps | 1500 |
| train/ | |
| entropy_loss | -41.1 |
| explained_variance | 1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 299 |
| policy_loss | 4.04e+08 |
| std | 0.998 |
| value_loss | 1.05e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 247 |
| iterations | 400 |
| time_elapsed | 8 |
| total_timesteps | 2000 |
| train/ | |
| entropy_loss | -41.1 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 399 |
| policy_loss | 4.23e+08 |
| std | 0.997 |
| value_loss | 1.31e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 256 |
| iterations | 500 |
| time_elapsed | 9 |
| total_timesteps | 2500 |
| train/ | |
| entropy_loss | -41.1 |
| explained_variance | 1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 499 |
| policy_loss | 5.81e+08 |
| std | 0.997 |
| value_loss | 2.44e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4947619.780454191
Sharpe: 0.8521509187283361
=================================
-------------------------------------
| time/ | |
| fps | 258 |
| iterations | 600 |
| time_elapsed | 11 |
| total_timesteps | 3000 |
| train/ | |
| entropy_loss | -41.1 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 599 |
| policy_loss | 1.36e+08 |
| std | 0.997 |
| value_loss | 1.21e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 265 |
| iterations | 700 |
| time_elapsed | 13 |
| total_timesteps | 3500 |
| train/ | |
| entropy_loss | -41 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 699 |
| policy_loss | 2.08e+08 |
| std | 0.996 |
| value_loss | 3.33e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 269 |
| iterations | 800 |
| time_elapsed | 14 |
| total_timesteps | 4000 |
| train/ | |
| entropy_loss | -41 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 799 |
| policy_loss | 3.02e+08 |
| std | 0.996 |
| value_loss | 6.32e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 272 |
| iterations | 900 |
| time_elapsed | 16 |
| total_timesteps | 4500 |
| train/ | |
| entropy_loss | -41 |
| explained_variance | 2.38e-07 |
| learning_rate | 0.0002 |
| n_updates | 899 |
| policy_loss | 4.09e+08 |
| std | 0.996 |
| value_loss | 1.14e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 275 |
| iterations | 1000 |
| time_elapsed | 18 |
| total_timesteps | 5000 |
| train/ | |
| entropy_loss | -41 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 999 |
| policy_loss | 5.29e+08 |
| std | 0.996 |
| value_loss | 1.62e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 278 |
| iterations | 1100 |
| time_elapsed | 19 |
| total_timesteps | 5500 |
| train/ | |
| entropy_loss | -41 |
| explained_variance | 5.96e-08 |
| learning_rate | 0.0002 |
| n_updates | 1099 |
| policy_loss | 6.81e+08 |
| std | 0.995 |
| value_loss | 2.91e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:5256363.347985752
Sharpe: 0.8827324589483913
=================================
------------------------------------
| time/ | |
| fps | 278 |
| iterations | 1200 |
| time_elapsed | 21 |
| total_timesteps | 6000 |
| train/ | |
| entropy_loss | -41 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 1199 |
| policy_loss | 1.53e+08 |
| std | 0.995 |
| value_loss | 1.8e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 279 |
| iterations | 1300 |
| time_elapsed | 23 |
| total_timesteps | 6500 |
| train/ | |
| entropy_loss | -41 |
| explained_variance | 1.79e-07 |
| learning_rate | 0.0002 |
| n_updates | 1299 |
| policy_loss | 1.96e+08 |
| std | 0.995 |
| value_loss | 2.98e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 280 |
| iterations | 1400 |
| time_elapsed | 24 |
| total_timesteps | 7000 |
| train/ | |
| entropy_loss | -41 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 1399 |
| policy_loss | 3.12e+08 |
| std | 0.995 |
| value_loss | 6.57e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 282 |
| iterations | 1500 |
| time_elapsed | 26 |
| total_timesteps | 7500 |
| train/ | |
| entropy_loss | -41 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 1499 |
| policy_loss | 3.61e+08 |
| std | 0.996 |
| value_loss | 8.94e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 284 |
| iterations | 1600 |
| time_elapsed | 28 |
| total_timesteps | 8000 |
| train/ | |
| entropy_loss | -41 |
| explained_variance | 5.96e-08 |
| learning_rate | 0.0002 |
| n_updates | 1599 |
| policy_loss | 4.43e+08 |
| std | 0.996 |
| value_loss | 1.67e+14 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 285 |
| iterations | 1700 |
| time_elapsed | 29 |
| total_timesteps | 8500 |
| train/ | |
| entropy_loss | -41 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 1699 |
| policy_loss | 5.89e+08 |
| std | 0.995 |
| value_loss | 2.44e+14 |
-------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4924025.62195718
Sharpe: 0.8441786909726531
=================================
-------------------------------------
| time/ | |
| fps | 284 |
| iterations | 1800 |
| time_elapsed | 31 |
| total_timesteps | 9000 |
| train/ | |
| entropy_loss | -41 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 1799 |
| policy_loss | 1.68e+08 |
| std | 0.994 |
| value_loss | 2.13e+13 |
-------------------------------------
-------------------------------------
| time/ | |
| fps | 286 |
| iterations | 1900 |
| time_elapsed | 33 |
| total_timesteps | 9500 |
| train/ | |
| entropy_loss | -41 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 1899 |
| policy_loss | 2.29e+08 |
| std | 0.994 |
| value_loss | 3.9e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 287 |
| iterations | 2000 |
| time_elapsed | 34 |
| total_timesteps | 10000 |
| train/ | |
| entropy_loss | -41 |
| explained_variance | 5.96e-08 |
| learning_rate | 0.0002 |
| n_updates | 1999 |
| policy_loss | 3.15e+08 |
| std | 0.993 |
| value_loss | 7.68e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 288 |
| iterations | 2100 |
| time_elapsed | 36 |
| total_timesteps | 10500 |
| train/ | |
| entropy_loss | -40.9 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 2099 |
| policy_loss | 4.01e+08 |
| std | 0.993 |
| value_loss | 1.09e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 289 |
| iterations | 2200 |
| time_elapsed | 38 |
| total_timesteps | 11000 |
| train/ | |
| entropy_loss | -40.9 |
| explained_variance | 1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 2199 |
| policy_loss | 5.5e+08 |
| std | 0.993 |
| value_loss | 2.24e+14 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 290 |
| iterations | 2300 |
| time_elapsed | 39 |
| total_timesteps | 11500 |
| train/ | |
| entropy_loss | -40.9 |
| explained_variance | -2.38e-07 |
| learning_rate | 0.0002 |
| n_updates | 2299 |
| policy_loss | 5.33e+08 |
| std | 0.993 |
| value_loss | 2.14e+14 |
-------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4982489.678803913
Sharpe: 0.8552046817106821
=================================
------------------------------------
| time/ | |
| fps | 289 |
| iterations | 2400 |
| time_elapsed | 41 |
| total_timesteps | 12000 |
| train/ | |
| entropy_loss | -40.9 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 2399 |
| policy_loss | 1.64e+08 |
| std | 0.993 |
| value_loss | 2.11e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 290 |
| iterations | 2500 |
| time_elapsed | 42 |
| total_timesteps | 12500 |
| train/ | |
| entropy_loss | -40.9 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 2499 |
| policy_loss | 2.15e+08 |
| std | 0.993 |
| value_loss | 3.99e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 291 |
| iterations | 2600 |
| time_elapsed | 44 |
| total_timesteps | 13000 |
| train/ | |
| entropy_loss | -40.9 |
| explained_variance | -2.38e-07 |
| learning_rate | 0.0002 |
| n_updates | 2599 |
| policy_loss | 3.37e+08 |
| std | 0.992 |
| value_loss | 8.69e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 292 |
| iterations | 2700 |
| time_elapsed | 46 |
| total_timesteps | 13500 |
| train/ | |
| entropy_loss | -40.9 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 2699 |
| policy_loss | 3.97e+08 |
| std | 0.992 |
| value_loss | 1.16e+14 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 293 |
| iterations | 2800 |
| time_elapsed | 47 |
| total_timesteps | 14000 |
| train/ | |
| entropy_loss | -40.9 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 2799 |
| policy_loss | 5.01e+08 |
| std | 0.992 |
| value_loss | 2.23e+14 |
-------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4983386.35610803
Sharpe: 0.8547109683374965
=================================
-------------------------------------
| time/ | |
| fps | 292 |
| iterations | 2900 |
| time_elapsed | 49 |
| total_timesteps | 14500 |
| train/ | |
| entropy_loss | -40.9 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 2899 |
| policy_loss | 1.26e+08 |
| std | 0.992 |
| value_loss | 8.44e+12 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 293 |
| iterations | 3000 |
| time_elapsed | 51 |
| total_timesteps | 15000 |
| train/ | |
| entropy_loss | -40.9 |
| explained_variance | 1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 2999 |
| policy_loss | 2.15e+08 |
| std | 0.991 |
| value_loss | 2.91e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 293 |
| iterations | 3100 |
| time_elapsed | 52 |
| total_timesteps | 15500 |
| train/ | |
| entropy_loss | -40.9 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 3099 |
| policy_loss | 2.35e+08 |
| std | 0.991 |
| value_loss | 4.57e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 294 |
| iterations | 3200 |
| time_elapsed | 54 |
| total_timesteps | 16000 |
| train/ | |
| entropy_loss | -40.9 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 3199 |
| policy_loss | 3.68e+08 |
| std | 0.99 |
| value_loss | 9.19e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 295 |
| iterations | 3300 |
| time_elapsed | 55 |
| total_timesteps | 16500 |
| train/ | |
| entropy_loss | -40.8 |
| explained_variance | 1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 3299 |
| policy_loss | 4.27e+08 |
| std | 0.99 |
| value_loss | 1.28e+14 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 295 |
| iterations | 3400 |
| time_elapsed | 57 |
| total_timesteps | 17000 |
| train/ | |
| entropy_loss | -40.8 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 3399 |
| policy_loss | 5.07e+08 |
| std | 0.989 |
| value_loss | 2.1e+14 |
-------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4850893.279908747
Sharpe: 0.8379486094766025
=================================
------------------------------------
| time/ | |
| fps | 295 |
| iterations | 3500 |
| time_elapsed | 59 |
| total_timesteps | 17500 |
| train/ | |
| entropy_loss | -40.8 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 3499 |
| policy_loss | 1.27e+08 |
| std | 0.989 |
| value_loss | 1.26e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 295 |
| iterations | 3600 |
| time_elapsed | 60 |
| total_timesteps | 18000 |
| train/ | |
| entropy_loss | -40.8 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 3599 |
| policy_loss | 1.98e+08 |
| std | 0.988 |
| value_loss | 2.94e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 295 |
| iterations | 3700 |
| time_elapsed | 62 |
| total_timesteps | 18500 |
| train/ | |
| entropy_loss | -40.8 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 3699 |
| policy_loss | 2.48e+08 |
| std | 0.988 |
| value_loss | 5.2e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 296 |
| iterations | 3800 |
| time_elapsed | 64 |
| total_timesteps | 19000 |
| train/ | |
| entropy_loss | -40.8 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 3799 |
| policy_loss | 3.42e+08 |
| std | 0.988 |
| value_loss | 9.02e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 296 |
| iterations | 3900 |
| time_elapsed | 65 |
| total_timesteps | 19500 |
| train/ | |
| entropy_loss | -40.8 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 3899 |
| policy_loss | 4.34e+08 |
| std | 0.988 |
| value_loss | 1.39e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 296 |
| iterations | 4000 |
| time_elapsed | 67 |
| total_timesteps | 20000 |
| train/ | |
| entropy_loss | -40.8 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 3999 |
| policy_loss | 5.6e+08 |
| std | 0.988 |
| value_loss | 2.47e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4647489.129320578
Sharpe: 0.8222750760796416
=================================
------------------------------------
| time/ | |
| fps | 296 |
| iterations | 4100 |
| time_elapsed | 69 |
| total_timesteps | 20500 |
| train/ | |
| entropy_loss | -40.8 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 4099 |
| policy_loss | 1.73e+08 |
| std | 0.989 |
| value_loss | 2e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 296 |
| iterations | 4200 |
| time_elapsed | 70 |
| total_timesteps | 21000 |
| train/ | |
| entropy_loss | -40.8 |
| explained_variance | 1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 4199 |
| policy_loss | 2.11e+08 |
| std | 0.988 |
| value_loss | 3.06e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 296 |
| iterations | 4300 |
| time_elapsed | 72 |
| total_timesteps | 21500 |
| train/ | |
| entropy_loss | -40.8 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 4299 |
| policy_loss | 2.94e+08 |
| std | 0.988 |
| value_loss | 6.91e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 297 |
| iterations | 4400 |
| time_elapsed | 73 |
| total_timesteps | 22000 |
| train/ | |
| entropy_loss | -40.8 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 4399 |
| policy_loss | 3.7e+08 |
| std | 0.988 |
| value_loss | 1.04e+14 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 297 |
| iterations | 4500 |
| time_elapsed | 75 |
| total_timesteps | 22500 |
| train/ | |
| entropy_loss | -40.8 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 4499 |
| policy_loss | 5.48e+08 |
| std | 0.988 |
| value_loss | 2.05e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 297 |
| iterations | 4600 |
| time_elapsed | 77 |
| total_timesteps | 23000 |
| train/ | |
| entropy_loss | -40.8 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 4599 |
| policy_loss | 6.6e+08 |
| std | 0.987 |
| value_loss | 3.23e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:5218141.028240858
Sharpe: 0.8745331898162475
=================================
------------------------------------
| time/ | |
| fps | 297 |
| iterations | 4700 |
| time_elapsed | 79 |
| total_timesteps | 23500 |
| train/ | |
| entropy_loss | -40.7 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 4699 |
| policy_loss | 1.66e+08 |
| std | 0.986 |
| value_loss | 1.93e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 297 |
| iterations | 4800 |
| time_elapsed | 80 |
| total_timesteps | 24000 |
| train/ | |
| entropy_loss | -40.7 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 4799 |
| policy_loss | 2.26e+08 |
| std | 0.986 |
| value_loss | 3.74e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 297 |
| iterations | 4900 |
| time_elapsed | 82 |
| total_timesteps | 24500 |
| train/ | |
| entropy_loss | -40.7 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 4899 |
| policy_loss | 3.48e+08 |
| std | 0.986 |
| value_loss | 8.01e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 298 |
| iterations | 5000 |
| time_elapsed | 83 |
| total_timesteps | 25000 |
| train/ | |
| entropy_loss | -40.7 |
| explained_variance | 1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 4999 |
| policy_loss | 3.53e+08 |
| std | 0.984 |
| value_loss | 1.07e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 298 |
| iterations | 5100 |
| time_elapsed | 85 |
| total_timesteps | 25500 |
| train/ | |
| entropy_loss | -40.7 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 5099 |
| policy_loss | 5.16e+08 |
| std | 0.984 |
| value_loss | 2.12e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 298 |
| iterations | 5200 |
| time_elapsed | 87 |
| total_timesteps | 26000 |
| train/ | |
| entropy_loss | -40.6 |
| explained_variance | 5.96e-08 |
| learning_rate | 0.0002 |
| n_updates | 5199 |
| policy_loss | 6.13e+08 |
| std | 0.983 |
| value_loss | 2.74e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:5383028.662466644
Sharpe: 0.8936439940751995
=================================
------------------------------------
| time/ | |
| fps | 297 |
| iterations | 5300 |
| time_elapsed | 89 |
| total_timesteps | 26500 |
| train/ | |
| entropy_loss | -40.6 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 5299 |
| policy_loss | 1.85e+08 |
| std | 0.983 |
| value_loss | 2.22e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 297 |
| iterations | 5400 |
| time_elapsed | 90 |
| total_timesteps | 27000 |
| train/ | |
| entropy_loss | -40.6 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 5399 |
| policy_loss | 2.44e+08 |
| std | 0.982 |
| value_loss | 3.91e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 297 |
| iterations | 5500 |
| time_elapsed | 92 |
| total_timesteps | 27500 |
| train/ | |
| entropy_loss | -40.6 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 5499 |
| policy_loss | 3.2e+08 |
| std | 0.982 |
| value_loss | 7.6e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 297 |
| iterations | 5600 |
| time_elapsed | 94 |
| total_timesteps | 28000 |
| train/ | |
| entropy_loss | -40.6 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 5599 |
| policy_loss | 3.9e+08 |
| std | 0.981 |
| value_loss | 1.07e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 297 |
| iterations | 5700 |
| time_elapsed | 95 |
| total_timesteps | 28500 |
| train/ | |
| entropy_loss | -40.6 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 5699 |
| policy_loss | 5.31e+08 |
| std | 0.98 |
| value_loss | 2.36e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4938418.793976344
Sharpe: 0.8524635335321091
=================================
------------------------------------
| time/ | |
| fps | 297 |
| iterations | 5800 |
| time_elapsed | 97 |
| total_timesteps | 29000 |
| train/ | |
| entropy_loss | -40.6 |
| explained_variance | 5.96e-08 |
| learning_rate | 0.0002 |
| n_updates | 5799 |
| policy_loss | 1.15e+08 |
| std | 0.98 |
| value_loss | 9.32e+12 |
------------------------------------
------------------------------------
| time/ | |
| fps | 297 |
| iterations | 5900 |
| time_elapsed | 99 |
| total_timesteps | 29500 |
| train/ | |
| entropy_loss | -40.5 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 5899 |
| policy_loss | 1.98e+08 |
| std | 0.979 |
| value_loss | 3.11e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 297 |
| iterations | 6000 |
| time_elapsed | 100 |
| total_timesteps | 30000 |
| train/ | |
| entropy_loss | -40.5 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 5999 |
| policy_loss | 2.73e+08 |
| std | 0.979 |
| value_loss | 5.46e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 297 |
| iterations | 6100 |
| time_elapsed | 102 |
| total_timesteps | 30500 |
| train/ | |
| entropy_loss | -40.5 |
| explained_variance | 1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 6099 |
| policy_loss | 3.64e+08 |
| std | 0.979 |
| value_loss | 1.05e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 297 |
| iterations | 6200 |
| time_elapsed | 104 |
| total_timesteps | 31000 |
| train/ | |
| entropy_loss | -40.5 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 6199 |
| policy_loss | 4.67e+08 |
| std | 0.979 |
| value_loss | 1.59e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 297 |
| iterations | 6300 |
| time_elapsed | 105 |
| total_timesteps | 31500 |
| train/ | |
| entropy_loss | -40.5 |
| explained_variance | 3.58e-07 |
| learning_rate | 0.0002 |
| n_updates | 6299 |
| policy_loss | 6.46e+08 |
| std | 0.979 |
| value_loss | 2.72e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:5107682.330574452
Sharpe: 0.862566827885531
=================================
------------------------------------
| time/ | |
| fps | 297 |
| iterations | 6400 |
| time_elapsed | 107 |
| total_timesteps | 32000 |
| train/ | |
| entropy_loss | -40.5 |
| explained_variance | 1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 6399 |
| policy_loss | 1.45e+08 |
| std | 0.979 |
| value_loss | 1.56e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 297 |
| iterations | 6500 |
| time_elapsed | 109 |
| total_timesteps | 32500 |
| train/ | |
| entropy_loss | -40.5 |
| explained_variance | -2.38e-07 |
| learning_rate | 0.0002 |
| n_updates | 6499 |
| policy_loss | 1.78e+08 |
| std | 0.979 |
| value_loss | 2.6e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 297 |
| iterations | 6600 |
| time_elapsed | 110 |
| total_timesteps | 33000 |
| train/ | |
| entropy_loss | -40.5 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 6599 |
| policy_loss | 2.68e+08 |
| std | 0.978 |
| value_loss | 5.4e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 297 |
| iterations | 6700 |
| time_elapsed | 112 |
| total_timesteps | 33500 |
| train/ | |
| entropy_loss | -40.5 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 6699 |
| policy_loss | 3.09e+08 |
| std | 0.977 |
| value_loss | 7.91e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 297 |
| iterations | 6800 |
| time_elapsed | 114 |
| total_timesteps | 34000 |
| train/ | |
| entropy_loss | -40.5 |
| explained_variance | 5.96e-08 |
| learning_rate | 0.0002 |
| n_updates | 6799 |
| policy_loss | 4.48e+08 |
| std | 0.977 |
| value_loss | 1.61e+14 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 297 |
| iterations | 6900 |
| time_elapsed | 115 |
| total_timesteps | 34500 |
| train/ | |
| entropy_loss | -40.4 |
| explained_variance | -3.58e-07 |
| learning_rate | 0.0002 |
| n_updates | 6899 |
| policy_loss | 5.77e+08 |
| std | 0.976 |
| value_loss | 2.48e+14 |
-------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4922680.366924848
Sharpe: 0.8487477275124381
=================================
------------------------------------
| time/ | |
| fps | 297 |
| iterations | 7000 |
| time_elapsed | 117 |
| total_timesteps | 35000 |
| train/ | |
| entropy_loss | -40.4 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 6999 |
| policy_loss | 1.61e+08 |
| std | 0.976 |
| value_loss | 1.92e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 297 |
| iterations | 7100 |
| time_elapsed | 119 |
| total_timesteps | 35500 |
| train/ | |
| entropy_loss | -40.4 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 7099 |
| policy_loss | 2.47e+08 |
| std | 0.975 |
| value_loss | 4.03e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 297 |
| iterations | 7200 |
| time_elapsed | 120 |
| total_timesteps | 36000 |
| train/ | |
| entropy_loss | -40.4 |
| explained_variance | 5.96e-08 |
| learning_rate | 0.0002 |
| n_updates | 7199 |
| policy_loss | 3.3e+08 |
| std | 0.975 |
| value_loss | 8.02e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 297 |
| iterations | 7300 |
| time_elapsed | 122 |
| total_timesteps | 36500 |
| train/ | |
| entropy_loss | -40.4 |
| explained_variance | 1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 7299 |
| policy_loss | 3.68e+08 |
| std | 0.975 |
| value_loss | 1.05e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 298 |
| iterations | 7400 |
| time_elapsed | 124 |
| total_timesteps | 37000 |
| train/ | |
| entropy_loss | -40.4 |
| explained_variance | 1.79e-07 |
| learning_rate | 0.0002 |
| n_updates | 7399 |
| policy_loss | 6.75e+08 |
| std | 0.975 |
| value_loss | 2.81e+14 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 298 |
| iterations | 7500 |
| time_elapsed | 125 |
| total_timesteps | 37500 |
| train/ | |
| entropy_loss | -40.4 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 7499 |
| policy_loss | 6.98e+08 |
| std | 0.975 |
| value_loss | 4.02e+14 |
-------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:5463707.333840418
Sharpe: 0.8998439105824559
=================================
-------------------------------------
| time/ | |
| fps | 297 |
| iterations | 7600 |
| time_elapsed | 127 |
| total_timesteps | 38000 |
| train/ | |
| entropy_loss | -40.4 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 7599 |
| policy_loss | 1.51e+08 |
| std | 0.974 |
| value_loss | 1.94e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 297 |
| iterations | 7700 |
| time_elapsed | 129 |
| total_timesteps | 38500 |
| train/ | |
| entropy_loss | -40.4 |
| explained_variance | 1.79e-07 |
| learning_rate | 0.0002 |
| n_updates | 7699 |
| policy_loss | 2.25e+08 |
| std | 0.974 |
| value_loss | 3.79e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 297 |
| iterations | 7800 |
| time_elapsed | 130 |
| total_timesteps | 39000 |
| train/ | |
| entropy_loss | -40.4 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 7799 |
| policy_loss | 3.35e+08 |
| std | 0.974 |
| value_loss | 8.94e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 297 |
| iterations | 7900 |
| time_elapsed | 132 |
| total_timesteps | 39500 |
| train/ | |
| entropy_loss | -40.4 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 7899 |
| policy_loss | 4.16e+08 |
| std | 0.974 |
| value_loss | 1.05e+14 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 297 |
| iterations | 8000 |
| time_elapsed | 134 |
| total_timesteps | 40000 |
| train/ | |
| entropy_loss | -40.4 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 7999 |
| policy_loss | 5.55e+08 |
| std | 0.974 |
| value_loss | 2.16e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 297 |
| iterations | 8100 |
| time_elapsed | 136 |
| total_timesteps | 40500 |
| train/ | |
| entropy_loss | -40.4 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 8099 |
| policy_loss | 6.54e+08 |
| std | 0.973 |
| value_loss | 2.94e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:5167968.235818689
Sharpe: 0.867187881540966
=================================
------------------------------------
| time/ | |
| fps | 297 |
| iterations | 8200 |
| time_elapsed | 137 |
| total_timesteps | 41000 |
| train/ | |
| entropy_loss | -40.4 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 8199 |
| policy_loss | 2.03e+08 |
| std | 0.973 |
| value_loss | 2.78e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 297 |
| iterations | 8300 |
| time_elapsed | 139 |
| total_timesteps | 41500 |
| train/ | |
| entropy_loss | -40.4 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 8299 |
| policy_loss | 2.3e+08 |
| std | 0.973 |
| value_loss | 4.59e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 297 |
| iterations | 8400 |
| time_elapsed | 141 |
| total_timesteps | 42000 |
| train/ | |
| entropy_loss | -40.3 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 8399 |
| policy_loss | 3.64e+08 |
| std | 0.972 |
| value_loss | 1.08e+14 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 297 |
| iterations | 8500 |
| time_elapsed | 142 |
| total_timesteps | 42500 |
| train/ | |
| entropy_loss | -40.3 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 8499 |
| policy_loss | 4.48e+08 |
| std | 0.971 |
| value_loss | 1.39e+14 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 297 |
| iterations | 8600 |
| time_elapsed | 144 |
| total_timesteps | 43000 |
| train/ | |
| entropy_loss | -40.3 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 8599 |
| policy_loss | 5.65e+08 |
| std | 0.972 |
| value_loss | 2.49e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:5295644.3420574665
Sharpe: 0.8781589563976329
=================================
------------------------------------
| time/ | |
| fps | 296 |
| iterations | 8700 |
| time_elapsed | 146 |
| total_timesteps | 43500 |
| train/ | |
| entropy_loss | -40.3 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 8699 |
| policy_loss | 1.32e+08 |
| std | 0.972 |
| value_loss | 1.15e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 297 |
| iterations | 8800 |
| time_elapsed | 148 |
| total_timesteps | 44000 |
| train/ | |
| entropy_loss | -40.3 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 8799 |
| policy_loss | 1.97e+08 |
| std | 0.971 |
| value_loss | 3.06e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 297 |
| iterations | 8900 |
| time_elapsed | 149 |
| total_timesteps | 44500 |
| train/ | |
| entropy_loss | -40.3 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 8899 |
| policy_loss | 2.8e+08 |
| std | 0.97 |
| value_loss | 5.78e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 297 |
| iterations | 9000 |
| time_elapsed | 151 |
| total_timesteps | 45000 |
| train/ | |
| entropy_loss | -40.3 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 8999 |
| policy_loss | 3.58e+08 |
| std | 0.97 |
| value_loss | 1.03e+14 |
-------------------------------------
-------------------------------------
| time/ | |
| fps | 297 |
| iterations | 9100 |
| time_elapsed | 153 |
| total_timesteps | 45500 |
| train/ | |
| entropy_loss | -40.2 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 9099 |
| policy_loss | 4.32e+08 |
| std | 0.969 |
| value_loss | 1.47e+14 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 297 |
| iterations | 9200 |
| time_elapsed | 154 |
| total_timesteps | 46000 |
| train/ | |
| entropy_loss | -40.2 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 9199 |
| policy_loss | 5.89e+08 |
| std | 0.969 |
| value_loss | 2.59e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4733644.272087766
Sharpe: 0.8277849696877948
=================================
------------------------------------
| time/ | |
| fps | 297 |
| iterations | 9300 |
| time_elapsed | 156 |
| total_timesteps | 46500 |
| train/ | |
| entropy_loss | -40.2 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 9299 |
| policy_loss | 1.47e+08 |
| std | 0.969 |
| value_loss | 1.74e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 297 |
| iterations | 9400 |
| time_elapsed | 158 |
| total_timesteps | 47000 |
| train/ | |
| entropy_loss | -40.2 |
| explained_variance | -2.38e-07 |
| learning_rate | 0.0002 |
| n_updates | 9399 |
| policy_loss | 1.9e+08 |
| std | 0.969 |
| value_loss | 2.79e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 297 |
| iterations | 9500 |
| time_elapsed | 159 |
| total_timesteps | 47500 |
| train/ | |
| entropy_loss | -40.2 |
| explained_variance | 5.96e-08 |
| learning_rate | 0.0002 |
| n_updates | 9499 |
| policy_loss | 3e+08 |
| std | 0.969 |
| value_loss | 6.81e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 297 |
| iterations | 9600 |
| time_elapsed | 161 |
| total_timesteps | 48000 |
| train/ | |
| entropy_loss | -40.2 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 9599 |
| policy_loss | 3.53e+08 |
| std | 0.968 |
| value_loss | 1.02e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 297 |
| iterations | 9700 |
| time_elapsed | 163 |
| total_timesteps | 48500 |
| train/ | |
| entropy_loss | -40.2 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 9699 |
| policy_loss | 4.88e+08 |
| std | 0.968 |
| value_loss | 1.81e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 297 |
| iterations | 9800 |
| time_elapsed | 164 |
| total_timesteps | 49000 |
| train/ | |
| entropy_loss | -40.2 |
| explained_variance | 5.96e-08 |
| learning_rate | 0.0002 |
| n_updates | 9799 |
| policy_loss | 5.74e+08 |
| std | 0.967 |
| value_loss | 2.77e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4989645.374122748
Sharpe: 0.8500872008767442
=================================
-------------------------------------
| time/ | |
| fps | 297 |
| iterations | 9900 |
| time_elapsed | 166 |
| total_timesteps | 49500 |
| train/ | |
| entropy_loss | -40.2 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 9899 |
| policy_loss | 1.91e+08 |
| std | 0.967 |
| value_loss | 2.26e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 296 |
| iterations | 10000 |
| time_elapsed | 168 |
| total_timesteps | 50000 |
| train/ | |
| entropy_loss | -40.2 |
| explained_variance | 5.96e-08 |
| learning_rate | 0.0002 |
| n_updates | 9999 |
| policy_loss | 2.36e+08 |
| std | 0.967 |
| value_loss | 4.14e+13 |
------------------------------------
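###Markdown
Optionally, the trained A2C model can be saved for later reuse. A minimal sketch, assuming stable-baselines3's standard `save` API and the trained-model directory created earlier (the file name `a2c_portfolio` is an arbitrary choice):
###Code
# Persist the trained agent; it can be restored later with A2C.load(...).
trained_a2c.save("./" + config.TRAINED_MODEL_DIR + "/a2c_portfolio")
###Output
_____no_output_____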
###Markdown
Model 2: **PPO**
###Code
agent = DRLAgent(env = env_train)
PPO_PARAMS = {
"n_steps": 2048,
"ent_coef": 0.005,
"learning_rate": 0.0001,
"batch_size": 128,
}
model_ppo = agent.get_model("ppo",model_kwargs = PPO_PARAMS)
trained_ppo = agent.train_model(model=model_ppo,
tb_log_name='ppo',
total_timesteps=80000)
###Output
Logging to tensorboard_log/ppo/ppo_1
-----------------------------
| time/ | |
| fps | 348 |
| iterations | 1 |
| time_elapsed | 5 |
| total_timesteps | 2048 |
-----------------------------
=================================
begin_total_asset:1000000
end_total_asset:5407408.522645024
Sharpe: 0.8945150154722756
=================================
---------------------------------------
| time/ | |
| fps | 314 |
| iterations | 2 |
| time_elapsed | 13 |
| total_timesteps | 4096 |
| train/ | |
| approx_kl | 0.0 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0001 |
| loss | 8.62e+14 |
| n_updates | 10 |
| policy_gradient_loss | -4.67e-07 |
| std | 1 |
| value_loss | 1.7e+15 |
---------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:5506664.127558984
Sharpe: 0.8978272251870139
=================================
---------------------------------------
| time/ | |
| fps | 307 |
| iterations | 3 |
| time_elapsed | 19 |
| total_timesteps | 6144 |
| train/ | |
| approx_kl | 0.0 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | 5.96e-08 |
| learning_rate | 0.0001 |
| loss | 1.56e+15 |
| n_updates | 20 |
| policy_gradient_loss | -5.11e-07 |
| std | 1 |
| value_loss | 3.29e+15 |
---------------------------------------
---------------------------------------
| time/ | |
| fps | 308 |
| iterations | 4 |
| time_elapsed | 26 |
| total_timesteps | 8192 |
| train/ | |
| approx_kl | 0.0 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | 1.19e-07 |
| learning_rate | 0.0001 |
| loss | 2.09e+15 |
| n_updates | 30 |
| policy_gradient_loss | -3.64e-07 |
| std | 1 |
| value_loss | 4.02e+15 |
---------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4973206.12529279
Sharpe: 0.852165346616796
=================================
---------------------------------------
| time/ | |
| fps | 307 |
| iterations | 5 |
| time_elapsed | 33 |
| total_timesteps | 10240 |
| train/ | |
| approx_kl | 0.0 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | 0 |
| learning_rate | 0.0001 |
| loss | 1.04e+15 |
| n_updates | 40 |
| policy_gradient_loss | -4.76e-07 |
| std | 1 |
| value_loss | 2.29e+15 |
---------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4983200.930282411
Sharpe: 0.8548590279297212
=================================
---------------------------------------
| time/ | |
| fps | 305 |
| iterations | 6 |
| time_elapsed | 40 |
| total_timesteps | 12288 |
| train/ | |
| approx_kl | 0.0 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0001 |
| loss | 9.92e+14 |
| n_updates | 50 |
| policy_gradient_loss | -5.26e-07 |
| std | 1 |
| value_loss | 2.31e+15 |
---------------------------------------
---------------------------------------
| time/ | |
| fps | 306 |
| iterations | 7 |
| time_elapsed | 46 |
| total_timesteps | 14336 |
| train/ | |
| approx_kl | 0.0 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | 1.19e-07 |
| learning_rate | 0.0001 |
| loss | 1.42e+15 |
| n_updates | 60 |
| policy_gradient_loss | -4.58e-07 |
| std | 1 |
| value_loss | 3.17e+15 |
---------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:5004305.2400275655
Sharpe: 0.8580894170419451
=================================
---------------------------------------
| time/ | |
| fps | 304 |
| iterations | 8 |
| time_elapsed | 53 |
| total_timesteps | 16384 |
| train/ | |
| approx_kl | 0.0 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | 1.19e-07 |
| learning_rate | 0.0001 |
| loss | 1.75e+15 |
| n_updates | 70 |
| policy_gradient_loss | -3.64e-07 |
| std | 1 |
| value_loss | 3.36e+15 |
---------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4767293.424762546
Sharpe: 0.8327081862255113
=================================
---------------------------------------
| time/ | |
| fps | 302 |
| iterations | 9 |
| time_elapsed | 60 |
| total_timesteps | 18432 |
| train/ | |
| approx_kl | 0.0 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | 1.79e-07 |
| learning_rate | 0.0001 |
| loss | 7.95e+14 |
| n_updates | 80 |
| policy_gradient_loss | -6.24e-07 |
| std | 1 |
| value_loss | 1.53e+15 |
---------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:5718116.186481766
Sharpe: 0.9168397946401206
=================================
---------------------------------------
| time/ | |
| fps | 302 |
| iterations | 10 |
| time_elapsed | 67 |
| total_timesteps | 20480 |
| train/ | |
| approx_kl | 0.0 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | 0 |
| learning_rate | 0.0001 |
| loss | 1.46e+15 |
| n_updates | 90 |
| policy_gradient_loss | -4.56e-07 |
| std | 1 |
| value_loss | 2.72e+15 |
---------------------------------------
--------------------------------------
| time/ | |
| fps | 302 |
| iterations | 11 |
| time_elapsed | 74 |
| total_timesteps | 22528 |
| train/ | |
| approx_kl | 0.0 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | 0 |
| learning_rate | 0.0001 |
| loss | 2.14e+15 |
| n_updates | 100 |
| policy_gradient_loss | -2.9e-07 |
| std | 1 |
| value_loss | 4.48e+15 |
--------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:5160448.888370996
Sharpe: 0.8687380292111129
=================================
---------------------------------------
| time/ | |
| fps | 300 |
| iterations | 12 |
| time_elapsed | 81 |
| total_timesteps | 24576 |
| train/ | |
| approx_kl | 0.0 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0001 |
| loss | 1.05e+15 |
| n_updates | 110 |
| policy_gradient_loss | -4.37e-07 |
| std | 1 |
| value_loss | 2.13e+15 |
---------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:5037083.305754823
Sharpe: 0.8608949299257355
=================================
---------------------------------------
| time/ | |
| fps | 299 |
| iterations | 13 |
| time_elapsed | 89 |
| total_timesteps | 26624 |
| train/ | |
| approx_kl | 0.0 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0001 |
| loss | 1.57e+15 |
| n_updates | 120 |
| policy_gradient_loss | -5.12e-07 |
| std | 1 |
| value_loss | 2.69e+15 |
---------------------------------------
---------------------------------------
| time/ | |
| fps | 299 |
| iterations | 14 |
| time_elapsed | 95 |
| total_timesteps | 28672 |
| train/ | |
| approx_kl | 0.0 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0001 |
| loss | 1.75e+15 |
| n_updates | 130 |
| policy_gradient_loss | -4.11e-07 |
| std | 1 |
| value_loss | 3.16e+15 |
---------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:5298096.091470836
Sharpe: 0.8835774791782369
=================================
---------------------------------------
| time/ | |
| fps | 298 |
| iterations | 15 |
| time_elapsed | 103 |
| total_timesteps | 30720 |
| train/ | |
| approx_kl | 0.0 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | 1.19e-07 |
| learning_rate | 0.0001 |
| loss | 1.43e+15 |
| n_updates | 140 |
| policy_gradient_loss | -4.82e-07 |
| std | 1 |
| value_loss | 3.08e+15 |
---------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4961477.565456702
Sharpe: 0.8562661606561773
=================================
---------------------------------------
| time/ | |
| fps | 298 |
| iterations | 16 |
| time_elapsed | 109 |
| total_timesteps | 32768 |
| train/ | |
| approx_kl | 0.0 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | 5.96e-08 |
| learning_rate | 0.0001 |
| loss | 1e+15 |
| n_updates | 150 |
| policy_gradient_loss | -5.88e-07 |
| std | 1 |
| value_loss | 2.03e+15 |
---------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:5527273.862226817
Sharpe: 0.9011641451672711
=================================
---------------------------------------
| time/ | |
| fps | 297 |
| iterations | 17 |
| time_elapsed | 116 |
| total_timesteps | 34816 |
| train/ | |
| approx_kl | 0.0 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | 0 |
| learning_rate | 0.0001 |
| loss | 1.52e+15 |
| n_updates | 160 |
| policy_gradient_loss | -3.48e-07 |
| std | 1 |
| value_loss | 3.22e+15 |
---------------------------------------
---------------------------------------
| time/ | |
| fps | 296 |
| iterations | 18 |
| time_elapsed | 124 |
| total_timesteps | 36864 |
| train/ | |
| approx_kl | 0.0 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | 0 |
| learning_rate | 0.0001 |
| loss | 2.13e+15 |
| n_updates | 170 |
| policy_gradient_loss | -3.79e-07 |
| std | 1 |
| value_loss | 4.35e+15 |
---------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:5181675.476854653
Sharpe: 0.8666935031584689
=================================
---------------------------------------
| time/ | |
| fps | 295 |
| iterations | 19 |
| time_elapsed | 131 |
| total_timesteps | 38912 |
| train/ | |
| approx_kl | 0.0 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0001 |
| loss | 8.44e+14 |
| n_updates | 180 |
| policy_gradient_loss | -4.85e-07 |
| std | 1 |
| value_loss | 1.79e+15 |
---------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4801715.467888956
Sharpe: 0.8392510886342651
=================================
---------------------------------------
| time/ | |
| fps | 295 |
| iterations | 20 |
| time_elapsed | 138 |
| total_timesteps | 40960 |
| train/ | |
| approx_kl | 0.0 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0001 |
| loss | 1.38e+15 |
| n_updates | 190 |
| policy_gradient_loss | -5.06e-07 |
| std | 1 |
| value_loss | 2.86e+15 |
---------------------------------------
---------------------------------------
| time/ | |
| fps | 295 |
| iterations | 21 |
| time_elapsed | 145 |
| total_timesteps | 43008 |
| train/ | |
| approx_kl | 0.0 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0001 |
| loss | 1.62e+15 |
| n_updates | 200 |
| policy_gradient_loss | -4.68e-07 |
| std | 1 |
| value_loss | 3.1e+15 |
---------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4968176.3422929365
Sharpe: 0.853754454109652
=================================
---------------------------------------
| time/ | |
| fps | 293 |
| iterations | 22 |
| time_elapsed | 153 |
| total_timesteps | 45056 |
| train/ | |
| approx_kl | 0.0 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | 5.96e-08 |
| learning_rate | 0.0001 |
| loss | 1.11e+15 |
| n_updates | 210 |
| policy_gradient_loss | -5.42e-07 |
| std | 1 |
| value_loss | 2.32e+15 |
---------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:5333210.09190493
Sharpe: 0.8892998023531967
=================================
---------------------------------------
| time/ | |
| fps | 293 |
| iterations | 23 |
| time_elapsed | 160 |
| total_timesteps | 47104 |
| train/ | |
| approx_kl | 0.0 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | 1.19e-07 |
| learning_rate | 0.0001 |
| loss | 1.03e+15 |
| n_updates | 220 |
| policy_gradient_loss | -5.01e-07 |
| std | 1 |
| value_loss | 2.26e+15 |
---------------------------------------
---------------------------------------
| time/ | |
| fps | 293 |
| iterations | 24 |
| time_elapsed | 167 |
| total_timesteps | 49152 |
| train/ | |
| approx_kl | 0.0 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | 0 |
| learning_rate | 0.0001 |
| loss | 1.94e+15 |
| n_updates | 230 |
| policy_gradient_loss | -3.64e-07 |
| std | 1 |
| value_loss | 3.71e+15 |
---------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:5028653.28685496
Sharpe: 0.8567058309087507
=================================
---------------------------------------
| time/ | |
| fps | 293 |
| iterations | 25 |
| time_elapsed | 174 |
| total_timesteps | 51200 |
| train/ | |
| approx_kl | 0.0 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | 0 |
| learning_rate | 0.0001 |
| loss | 1.75e+15 |
| n_updates | 240 |
| policy_gradient_loss | -3.74e-07 |
| std | 1 |
| value_loss | 3.8e+15 |
---------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:5396539.202192561
Sharpe: 0.8909163947432485
=================================
---------------------------------------
| time/ | |
| fps | 293 |
| iterations | 26 |
| time_elapsed | 181 |
| total_timesteps | 53248 |
| train/ | |
| approx_kl | 0.0 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | 0 |
| learning_rate | 0.0001 |
| loss | 8.89e+14 |
| n_updates | 250 |
| policy_gradient_loss | -6.69e-07 |
| std | 1 |
| value_loss | 1.66e+15 |
---------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4983470.113884904
Sharpe: 0.855726983366501
=================================
---------------------------------------
| time/ | |
| fps | 293 |
| iterations | 27 |
| time_elapsed | 188 |
| total_timesteps | 55296 |
| train/ | |
| approx_kl | 0.0 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | 0 |
| learning_rate | 0.0001 |
| loss | 1.57e+15 |
| n_updates | 260 |
| policy_gradient_loss | -4.51e-07 |
| std | 1 |
| value_loss | 3.26e+15 |
---------------------------------------
---------------------------------------
| time/ | |
| fps | 294 |
| iterations | 28 |
| time_elapsed | 194 |
| total_timesteps | 57344 |
| train/ | |
| approx_kl | 0.0 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0001 |
| loss | 1.62e+15 |
| n_updates | 270 |
| policy_gradient_loss | -2.94e-07 |
| std | 1 |
| value_loss | 3.53e+15 |
---------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:5077533.069990031
Sharpe: 0.8698859623755926
=================================
---------------------------------------
| time/ | |
| fps | 294 |
| iterations | 29 |
| time_elapsed | 201 |
| total_timesteps | 59392 |
| train/ | |
| approx_kl | 0.0 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | 0 |
| learning_rate | 0.0001 |
| loss | 1.04e+15 |
| n_updates | 280 |
| policy_gradient_loss | -3.94e-07 |
| std | 1 |
| value_loss | 2.26e+15 |
---------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4718933.011698665
Sharpe: 0.8300365631029007
=================================
---------------------------------------
| time/ | |
| fps | 294 |
| iterations | 30 |
| time_elapsed | 208 |
| total_timesteps | 61440 |
| train/ | |
| approx_kl | 0.0 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | 1.19e-07 |
| learning_rate | 0.0001 |
| loss | 1.46e+15 |
| n_updates | 290 |
| policy_gradient_loss | -4.45e-07 |
| std | 1 |
| value_loss | 2.48e+15 |
---------------------------------------
---------------------------------------
| time/ | |
| fps | 294 |
| iterations | 31 |
| time_elapsed | 215 |
| total_timesteps | 63488 |
| train/ | |
| approx_kl | 0.0 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | 5.96e-08 |
| learning_rate | 0.0001 |
| loss | 1.67e+15 |
| n_updates | 300 |
| policy_gradient_loss | -4.25e-07 |
| std | 1 |
| value_loss | 3.25e+15 |
---------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:5400644.928048184
Sharpe: 0.8915105700881634
=================================
---------------------------------------
| time/ | |
| fps | 294 |
| iterations | 32 |
| time_elapsed | 222 |
| total_timesteps | 65536 |
| train/ | |
| approx_kl | 0.0 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | 0 |
| learning_rate | 0.0001 |
| loss | 1.81e+15 |
| n_updates | 310 |
| policy_gradient_loss | -3.69e-07 |
| std | 1 |
| value_loss | 3.63e+15 |
---------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:5244251.123995519
Sharpe: 0.8820405593077815
=================================
---------------------------------------
| time/ | |
| fps | 294 |
| iterations | 33 |
| time_elapsed | 229 |
| total_timesteps | 67584 |
| train/ | |
| approx_kl | 0.0 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | 0 |
| learning_rate | 0.0001 |
| loss | 1.05e+15 |
| n_updates | 320 |
| policy_gradient_loss | -5.81e-07 |
| std | 1 |
| value_loss | 1.94e+15 |
---------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:5323679.181851208
Sharpe: 0.8782748494629397
=================================
---------------------------------------
| time/ | |
| fps | 293 |
| iterations | 34 |
| time_elapsed | 236 |
| total_timesteps | 69632 |
| train/ | |
| approx_kl | 0.0 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | 0 |
| learning_rate | 0.0001 |
| loss | 1.91e+15 |
| n_updates | 330 |
| policy_gradient_loss | -4.24e-07 |
| std | 1 |
| value_loss | 3.54e+15 |
---------------------------------------
---------------------------------------
| time/ | |
| fps | 294 |
| iterations | 35 |
| time_elapsed | 243 |
| total_timesteps | 71680 |
| train/ | |
| approx_kl | 0.0 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | 0 |
| learning_rate | 0.0001 |
| loss | 1.97e+15 |
| n_updates | 340 |
| policy_gradient_loss | -3.79e-07 |
| std | 1 |
| value_loss | 4.1e+15 |
---------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:5231003.911583127
Sharpe: 0.8742792085154348
=================================
---------------------------------------
| time/ | |
| fps | 294 |
| iterations | 36 |
| time_elapsed | 250 |
| total_timesteps | 73728 |
| train/ | |
| approx_kl | 0.0 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | 1.79e-07 |
| learning_rate | 0.0001 |
| loss | 1.06e+15 |
| n_updates | 350 |
| policy_gradient_loss | -5.38e-07 |
| std | 1 |
| value_loss | 2.07e+15 |
---------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:5431876.339119544
Sharpe: 0.8973709821852228
=================================
---------------------------------------
| time/ | |
| fps | 293 |
| iterations | 37 |
| time_elapsed | 258 |
| total_timesteps | 75776 |
| train/ | |
| approx_kl | 0.0 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0001 |
| loss | 1.6e+15 |
| n_updates | 360 |
| policy_gradient_loss | -4.84e-07 |
| std | 1 |
| value_loss | 3.03e+15 |
---------------------------------------
---------------------------------------
| time/ | |
| fps | 293 |
| iterations | 38 |
| time_elapsed | 265 |
| total_timesteps | 77824 |
| train/ | |
| approx_kl | 0.0 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | 0 |
| learning_rate | 0.0001 |
| loss | 1.85e+15 |
| n_updates | 370 |
| policy_gradient_loss | -3.88e-07 |
| std | 1 |
| value_loss | 3.85e+15 |
---------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:5174445.4087701775
Sharpe: 0.8746583729733338
=================================
---------------------------------------
| time/ | |
| fps | 292 |
| iterations | 39 |
| time_elapsed | 272 |
| total_timesteps | 79872 |
| train/ | |
| approx_kl | 0.0 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | 1.19e-07 |
| learning_rate | 0.0001 |
| loss | 1.42e+15 |
| n_updates | 380 |
| policy_gradient_loss | -4.64e-07 |
| std | 1 |
| value_loss | 2.86e+15 |
---------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4789004.776252521
Sharpe: 0.8357083479334512
=================================
---------------------------------------
| time/ | |
| fps | 292 |
| iterations | 40 |
| time_elapsed | 280 |
| total_timesteps | 81920 |
| train/ | |
| approx_kl | 0.0 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | 0 |
| learning_rate | 0.0001 |
| loss | 8.65e+14 |
| n_updates | 390 |
| policy_gradient_loss | -6.17e-07 |
| std | 1 |
| value_loss | 2e+15 |
---------------------------------------
###Markdown
Model 3: **DDPG**
###Code
agent = DRLAgent(env = env_train)
DDPG_PARAMS = {"batch_size": 128, "buffer_size": 50000, "learning_rate": 0.001}
model_ddpg = agent.get_model("ddpg",model_kwargs = DDPG_PARAMS)
trained_ddpg = agent.train_model(model=model_ddpg,
tb_log_name='ddpg',
total_timesteps=50000)
###Output
Logging to tensorboard_log/ddpg/ddpg_2
=================================
begin_total_asset:1000000
end_total_asset:4625995.900359718
Sharpe: 1.040202670783119
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
----------------------------------
| time/ | |
| episodes | 4 |
| fps | 22 |
| time_elapsed | 439 |
| total timesteps | 10064 |
| train/ | |
| actor_loss | -6.99e+07 |
| critic_loss | 7.27e+12 |
| learning_rate | 0.001 |
| n_updates | 7548 |
----------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
----------------------------------
| time/ | |
| episodes | 8 |
| fps | 20 |
| time_elapsed | 980 |
| total timesteps | 20128 |
| train/ | |
| actor_loss | -1.44e+08 |
| critic_loss | 1.81e+13 |
| learning_rate | 0.001 |
| n_updates | 17612 |
----------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
----------------------------------
| time/ | |
| episodes | 12 |
| fps | 19 |
| time_elapsed | 1542 |
| total timesteps | 30192 |
| train/ | |
| actor_loss | -1.88e+08 |
| critic_loss | 2.72e+13 |
| learning_rate | 0.001 |
| n_updates | 27676 |
----------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
----------------------------------
| time/ | |
| episodes | 16 |
| fps | 18 |
| time_elapsed | 2133 |
| total timesteps | 40256 |
| train/ | |
| actor_loss | -2.15e+08 |
| critic_loss | 3.45e+13 |
| learning_rate | 0.001 |
| n_updates | 37740 |
----------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
---------------------------------
| time/ | |
| episodes | 20 |
| fps | 17 |
| time_elapsed | 2874 |
| total timesteps | 50320 |
| train/ | |
| actor_loss | -2.3e+08 |
| critic_loss | 4.05e+13 |
| learning_rate | 0.001 |
| n_updates | 47804 |
---------------------------------
###Markdown
Model 4: **SAC**
###Code
agent = DRLAgent(env = env_train)
SAC_PARAMS = {
"batch_size": 128,
"buffer_size": 100000,
"learning_rate": 0.0003,
"learning_starts": 100,
"ent_coef": "auto_0.1",
}
model_sac = agent.get_model("sac",model_kwargs = SAC_PARAMS)
trained_sac = agent.train_model(model=model_sac,
tb_log_name='sac',
total_timesteps=50000)
###Output
Logging to tensorboard_log/sac/sac_1
=================================
begin_total_asset:1000000
end_total_asset:4449463.498168942
Sharpe: 1.01245667390232
=================================
=================================
begin_total_asset:1000000
end_total_asset:4418643.239765096
Sharpe: 1.0135796594260282
=================================
=================================
begin_total_asset:1000000
end_total_asset:4418644.1960784905
Sharpe: 1.0135797537524718
=================================
=================================
begin_total_asset:1000000
end_total_asset:4418659.429680678
Sharpe: 1.013581852537709
=================================
----------------------------------
| time/ | |
| episodes | 4 |
| fps | 12 |
| time_elapsed | 783 |
| total timesteps | 10064 |
| train/ | |
| actor_loss | -8.83e+07 |
| critic_loss | 6.57e+12 |
| ent_coef | 2.24 |
| ent_coef_loss | -205 |
| learning_rate | 0.0003 |
| n_updates | 9963 |
----------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4418651.576406099
Sharpe: 1.013581224026754
=================================
=================================
begin_total_asset:1000000
end_total_asset:4418670.948269031
Sharpe: 1.0135838030234754
=================================
=================================
begin_total_asset:1000000
end_total_asset:4418682.278829884
Sharpe: 1.013585596968056
=================================
=================================
begin_total_asset:1000000
end_total_asset:4418791.911955293
Sharpe: 1.0136007328171013
=================================
----------------------------------
| time/ | |
| episodes | 8 |
| fps | 12 |
| time_elapsed | 1585 |
| total timesteps | 20128 |
| train/ | |
| actor_loss | -1.51e+08 |
| critic_loss | 1.12e+13 |
| ent_coef | 41.7 |
| ent_coef_loss | -670 |
| learning_rate | 0.0003 |
| n_updates | 20027 |
----------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4418737.365107464
Sharpe: 1.0135970410224868
=================================
=================================
begin_total_asset:1000000
end_total_asset:4418754.895735274
Sharpe: 1.0135965589029627
=================================
=================================
begin_total_asset:1000000
end_total_asset:4419325.814567342
Sharpe: 1.0136807224228588
=================================
=================================
begin_total_asset:1000000
end_total_asset:4418142.473513333
Sharpe: 1.0135234795926031
=================================
----------------------------------
| time/ | |
| episodes | 12 |
| fps | 12 |
| time_elapsed | 2400 |
| total timesteps | 30192 |
| train/ | |
| actor_loss | -1.85e+08 |
| critic_loss | 1.87e+13 |
| ent_coef | 725 |
| ent_coef_loss | -673 |
| learning_rate | 0.0003 |
| n_updates | 30091 |
----------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4422046.188863339
Sharpe: 1.0140936726052256
=================================
=================================
begin_total_asset:1000000
end_total_asset:4424919.463828854
Sharpe: 1.014521127041106
=================================
=================================
begin_total_asset:1000000
end_total_asset:4427483.152494239
Sharpe: 1.0148626804754584
=================================
=================================
begin_total_asset:1000000
end_total_asset:4460697.650185859
Sharpe: 1.019852362102548
=================================
----------------------------------
| time/ | |
| episodes | 16 |
| fps | 12 |
| time_elapsed | 3210 |
| total timesteps | 40256 |
| train/ | |
| actor_loss | -1.93e+08 |
| critic_loss | 1.62e+13 |
| ent_coef | 1.01e+04 |
| ent_coef_loss | -238 |
| learning_rate | 0.0003 |
| n_updates | 40155 |
----------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4434035.982803257
Sharpe: 1.0161512551319891
=================================
=================================
begin_total_asset:1000000
end_total_asset:4454728.906041551
Sharpe: 1.018484863448905
=================================
=================================
begin_total_asset:1000000
end_total_asset:4475667.120269234
Sharpe: 1.0215545521682856
=================================
###Markdown
Model 5: **TD3**
###Code
agent = DRLAgent(env = env_train)
TD3_PARAMS = {"batch_size": 100,
"buffer_size": 1000000,
"learning_rate": 0.001}
model_td3 = agent.get_model("td3",model_kwargs = TD3_PARAMS)
trained_td3 = agent.train_model(model=model_td3,
tb_log_name='td3',
total_timesteps=30000)
###Output
Logging to tensorboard_log/td3/td3_1
=================================
begin_total_asset:1000000
end_total_asset:5232441.848437611
Sharpe: 0.8749907118878204
=================================
=================================
begin_total_asset:1000000
end_total_asset:5140658.98428856
Sharpe: 0.8628057073557059
=================================
=================================
begin_total_asset:1000000
end_total_asset:5140658.98428856
Sharpe: 0.8628057073557059
=================================
=================================
begin_total_asset:1000000
end_total_asset:5140658.98428856
Sharpe: 0.8628057073557059
=================================
----------------------------------
| time/ | |
| episodes | 4 |
| fps | 25 |
| time_elapsed | 445 |
| total timesteps | 11572 |
| train/ | |
| actor_loss | -4.69e+07 |
| critic_loss | 1.08e+13 |
| learning_rate | 0.001 |
| n_updates | 8679 |
----------------------------------
=================================
begin_total_asset:1000000
end_total_asset:5140658.98428856
Sharpe: 0.8628057073557059
=================================
=================================
begin_total_asset:1000000
end_total_asset:5140658.98428856
Sharpe: 0.8628057073557059
=================================
=================================
begin_total_asset:1000000
end_total_asset:5140658.98428856
Sharpe: 0.8628057073557059
=================================
=================================
begin_total_asset:1000000
end_total_asset:5140658.98428856
Sharpe: 0.8628057073557059
=================================
----------------------------------
| time/ | |
| episodes | 8 |
| fps | 23 |
| time_elapsed | 985 |
| total timesteps | 23144 |
| train/ | |
| actor_loss | -1.05e+08 |
| critic_loss | 2.77e+13 |
| learning_rate | 0.001 |
| n_updates | 20251 |
----------------------------------
=================================
begin_total_asset:1000000
end_total_asset:5140658.98428856
Sharpe: 0.8628057073557059
=================================
=================================
begin_total_asset:1000000
end_total_asset:5140658.98428856
Sharpe: 0.8628057073557059
=================================
=================================
begin_total_asset:1000000
end_total_asset:5140658.98428856
Sharpe: 0.8628057073557059
=================================
###Markdown
Trading

Assume that we have $1,000,000 initial capital at 2020-07-01. We use the trained A2C model to trade the Dow Jones 30 stocks.
###Code
trade = data_split(df,'2020-07-01', '2021-07-01')
e_trade_gym = StockPortfolioEnv(df = trade, **env_kwargs)
trade.shape
df_daily_return, df_actions = DRLAgent.DRL_prediction(model=trained_a2c,
environment = e_trade_gym)
df_daily_return.head()
df_daily_return.to_csv('df_daily_return.csv')
df_actions.head()
df_actions.to_csv('df_actions.csv')
###Output
_____no_output_____
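###Markdown
A quick sanity check on the two artifacts saved above: `df_daily_return` holds one realized portfolio return per trading day, while each row of `df_actions` is the weight vector the agent assigned across the tickers that day. A minimal check (assuming the environment normalizes actions into weights that sum to one):
###Code
# Each row of df_actions is that day's weight vector over the tickers;
# after the environment's normalization the weights should sum to ~1
print(df_actions.select_dtypes("number").sum(axis=1).describe())
###Output
_____no_output_____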
###Markdown
Part 7: Backtest Our Strategy

Backtesting plays a key role in evaluating the performance of a trading strategy. An automated backtesting tool is preferred because it reduces human error. We usually use the Quantopian pyfolio package to backtest our trading strategies. It is easy to use and consists of various individual plots that together provide a comprehensive image of the performance of a trading strategy.

7.1 BackTestStats

Pass in the daily return series (`df_daily_return`); this information is stored in the env class.
###Code
from pyfolio import timeseries
DRL_strat = convert_daily_return_to_pyfolio_ts(df_daily_return)
perf_func = timeseries.perf_stats
perf_stats_all = perf_func(returns=DRL_strat,
                           factor_returns=DRL_strat,
                           positions=None, transactions=None, turnover_denom="AGB")
print("==============DRL Strategy Stats===========")
perf_stats_all
#baseline stats
print("==============Get Baseline Stats===========")
baseline_df = get_baseline(
ticker="^DJI",
start = df_daily_return.loc[0,'date'],
end = df_daily_return.loc[len(df_daily_return)-1,'date'])
stats = backtest_stats(baseline_df, value_col_name = 'close')
###Output
==============Get Baseline Stats===========
[*********************100%***********************] 1 of 1 completed
Shape of DataFrame: (251, 8)
Annual return 0.334042
Cumulative returns 0.332517
Annual volatility 0.146033
Sharpe ratio 2.055458
Calmar ratio 3.740347
Stability 0.945402
Max drawdown -0.089308
Omega ratio 1.408111
Sortino ratio 3.075978
Skew NaN
Kurtosis NaN
Tail ratio 1.078766
Daily value at risk -0.017207
dtype: float64
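###Markdown
As a sanity check, the headline statistics can also be computed directly from the daily return series. A minimal sketch (assuming the `df_daily_return` frame produced by `DRL_prediction` above; small differences versus pyfolio come from its exact compounding conventions):
###Code
import numpy as np

# Annualized Sharpe ratio from daily returns: mean over std, scaled by sqrt(252 trading days)
daily = df_daily_return["daily_return"].dropna()
sharpe = np.sqrt(252) * daily.mean() / daily.std()
# Cumulative return over the trading window: compound the daily returns
cum_return = (1 + daily).prod() - 1
print(f"Sharpe: {sharpe:.4f}, cumulative return: {cum_return:.4%}")
###Output
_____no_output_____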
###Markdown
7.2 BackTestPlot
###Code
import pyfolio
%matplotlib inline
baseline_df = get_baseline(
ticker='^DJI', start=df_daily_return.loc[0,'date'], end='2021-07-01'
)
baseline_returns = get_daily_return(baseline_df, value_col_name="close")
with pyfolio.plotting.plotting_context(font_scale=1.1):
pyfolio.create_full_tear_sheet(returns = DRL_strat,
benchmark_rets=baseline_returns, set_context=False)
###Output
_____no_output_____
###Markdown
Min-Variance Portfolio Allocation
###Code
!pip install PyPortfolioOpt
from pypfopt.efficient_frontier import EfficientFrontier
from pypfopt import risk_models
unique_tic = trade.tic.unique()
unique_trade_date = trade.date.unique()
df.head()
#calculate_portfolio_minimum_variance
portfolio = pd.DataFrame(index = range(1), columns = unique_trade_date)
initial_capital = 1000000
portfolio.loc[0,unique_trade_date[0]] = initial_capital
for i in range(len(unique_trade_date)-1):
df_temp = df[df.date==unique_trade_date[i]].reset_index(drop=True)
df_temp_next = df[df.date==unique_trade_date[i+1]].reset_index(drop=True)
#Sigma = risk_models.sample_cov(df_temp.return_list[0])
#calculate covariance matrix
Sigma = df_temp.return_list[0].cov()
#portfolio allocation
ef_min_var = EfficientFrontier(None, Sigma,weight_bounds=(0, 0.1))
#minimum variance
raw_weights_min_var = ef_min_var.min_volatility()
#get weights
cleaned_weights_min_var = ef_min_var.clean_weights()
#current capital
cap = portfolio.iloc[0, i]
#current cash invested for each stock
current_cash = [element * cap for element in list(cleaned_weights_min_var.values())]
# current held shares
current_shares = list(np.array(current_cash)
/ np.array(df_temp.close))
# next time period price
next_price = np.array(df_temp_next.close)
##next_price * current share to calculate next total account value
portfolio.iloc[0, i+1] = np.dot(current_shares, next_price)
portfolio=portfolio.T
portfolio.columns = ['account_value']
portfolio.head()
a2c_cumpod =(df_daily_return.daily_return+1).cumprod()-1
min_var_cumpod =(portfolio.account_value.pct_change()+1).cumprod()-1
dji_cumpod =(baseline_returns+1).cumprod()-1
###Output
_____no_output_____
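###Markdown
The three series are placed on a common scale as cumulative returns: for daily returns $r_1, \dots, r_t$, the cumulative return is $CR_t = \prod_{i=1}^{t} (1 + r_i) - 1$, which is exactly what the `(x + 1).cumprod() - 1` pattern above computes for the A2C strategy, the min-variance portfolio, and the DJIA baseline.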
###Markdown
Plotly: DRL, Min-Variance, DJIA
###Code
from datetime import datetime as dt
import matplotlib.pyplot as plt
import plotly
import plotly.graph_objs as go
time_ind = pd.Series(df_daily_return.date)
trace0_portfolio = go.Scatter(x = time_ind, y = a2c_cumpod, mode = 'lines', name = 'A2C (Portfolio Allocation)')
trace1_portfolio = go.Scatter(x = time_ind, y = dji_cumpod, mode = 'lines', name = 'DJIA')
trace2_portfolio = go.Scatter(x = time_ind, y = min_var_cumpod, mode = 'lines', name = 'Min-Variance')
fig = go.Figure()
fig.add_trace(trace0_portfolio)
fig.add_trace(trace1_portfolio)
fig.add_trace(trace2_portfolio)
fig.update_layout(
legend=dict(
x=0,
y=1,
traceorder="normal",
font=dict(
family="sans-serif",
size=15,
color="black"
),
bgcolor="White",
bordercolor="white",
borderwidth=2
),
)
#fig.update_layout(legend_orientation="h")
fig.update_layout(title={
#'text': "Cumulative Return using FinRL",
'y':0.85,
'x':0.5,
'xanchor': 'center',
'yanchor': 'top'})
#with Transaction cost
#fig.update_layout(title = 'Quarterly Trade Date')
fig.update_layout(
# margin=dict(l=20, r=20, t=20, b=20),
paper_bgcolor='rgba(1,1,0,0)',
plot_bgcolor='rgba(1, 1, 0, 0)',
#xaxis_title="Date",
yaxis_title="Cumulative Return",
xaxis={'type': 'date',
'tick0': time_ind[0],
'tickmode': 'linear',
'dtick': 86400000.0 *80}
)
fig.update_xaxes(showline=True,linecolor='black',showgrid=True, gridwidth=1, gridcolor='LightSteelBlue',mirror=True)
fig.update_yaxes(showline=True,linecolor='black',showgrid=True, gridwidth=1, gridcolor='LightSteelBlue',mirror=True)
fig.update_yaxes(zeroline=True, zerolinewidth=1, zerolinecolor='LightSteelBlue')
fig.show()
###Output
_____no_output_____
###Markdown
Deep Reinforcement Learning for Stock Trading from Scratch: Portfolio Allocation

Tutorial on using OpenAI DRL to perform portfolio allocation in one Jupyter Notebook | Presented at NeurIPS 2020: Deep RL Workshop

* This blog is based on our paper: FinRL: A Deep Reinforcement Learning Library for Automated Stock Trading in Quantitative Finance, presented at NeurIPS 2020: Deep RL Workshop.
* Check out the Medium blog for detailed explanations:
* Please report any issues to our Github: https://github.com/AI4Finance-LLC/FinRL-Library/issues
* **Pytorch Version**

Content

* [1. Problem Definition](0)
* [2. Getting Started - Load Python Packages](1)
  * [2.1. Install Packages](1.1)
  * [2.2. Check Additional Packages](1.2)
  * [2.3. Import Packages](1.3)
  * [2.4. Create Folders](1.4)
* [3. Download Data](2)
* [4. Preprocess Data](3)
  * [4.1. Technical Indicators](3.1)
  * [4.2. Perform Feature Engineering](3.2)
* [5. Build Environment](4)
  * [5.1. Training & Trade Data Split](4.1)
  * [5.2. User-defined Environment](4.2)
  * [5.3. Initialize Environment](4.3)
* [6. Implement DRL Algorithms](5)
* [7. Backtesting Performance](6)
  * [7.1. BackTestStats](6.1)
  * [7.2. BackTestPlot](6.2)
  * [7.3. Baseline Stats](6.3)
  * [7.4. Compare to Stock Market Index](6.4)

Part 1. Problem Definition

This problem is to design an automated trading solution for portfolio allocation. We model the trading process as a Markov Decision Process (MDP), and we then formulate our trading goal as a maximization problem.

The algorithm is trained using Deep Reinforcement Learning (DRL) algorithms, and the components of the reinforcement learning environment are:

* Action: the action space describes the allowed actions through which the agent interacts with the environment. Normally, a ∈ A includes three actions, a ∈ {−1, 0, 1}, where −1, 0, 1 represent selling, holding, and buying one stock. An action can also be carried out on multiple shares; we use an action space {−k, ..., −1, 0, 1, ..., k}, where k denotes the number of shares. For example, "Buy 10 shares of AAPL" and "Sell 10 shares of AAPL" are 10 and −10, respectively.
* Reward function: r(s, a, s′) is the incentive mechanism for an agent to learn a better policy. It is the change of the portfolio value when action a is taken at state s and the agent arrives at the new state s′, i.e., r(s, a, s′) = v′ − v, where v′ and v represent the portfolio values at states s′ and s, respectively.
* State: the state space describes the observations that the agent receives from the environment. Just as a human trader needs to analyze various information before executing a trade, our trading agent observes many different features to better learn in an interactive environment.
* Environment: Dow 30 constituents.

The data that we will be using for this case study is obtained from the Yahoo Finance API. The data contains Open-High-Low-Close prices and volume.

Part 2. Getting Started - Load Python Packages

2.1. Install all the packages through the FinRL library
###Code
## install finrl library
!pip install git+https://github.com/AI4Finance-LLC/FinRL-Library.git
###Output
Collecting git+https://github.com/AI4Finance-LLC/FinRL-Library.git
Cloning https://github.com/AI4Finance-LLC/FinRL-Library.git to /tmp/pip-req-build-bwdyljxc
Running command git clone -q https://github.com/AI4Finance-LLC/FinRL-Library.git /tmp/pip-req-build-bwdyljxc
Requirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from finrl==0.0.3) (1.19.5)
Requirement already satisfied: pandas>=1.1.5 in /usr/local/lib/python3.7/dist-packages (from finrl==0.0.3) (1.1.5)
Collecting stockstats
Downloading https://files.pythonhosted.org/packages/32/41/d3828c5bc0a262cb3112a4024108a3b019c183fa3b3078bff34bf25abf91/stockstats-0.3.2-py2.py3-none-any.whl
Collecting yfinance
Downloading https://files.pythonhosted.org/packages/7a/e8/b9d7104d3a4bf39924799067592d9e59119fcfc900a425a12e80a3123ec8/yfinance-0.1.55.tar.gz
Requirement already satisfied: matplotlib in /usr/local/lib/python3.7/dist-packages (from finrl==0.0.3) (3.2.2)
Requirement already satisfied: scikit-learn>=0.21.0 in /usr/local/lib/python3.7/dist-packages (from finrl==0.0.3) (0.22.2.post1)
Requirement already satisfied: gym>=0.17 in /usr/local/lib/python3.7/dist-packages (from finrl==0.0.3) (0.17.3)
Collecting stable-baselines3[extra]
  Downloading https://files.pythonhosted.org/packages/76/7c/ec89fd9a51c2ff640f150479069be817136c02f02349b5dd27a6e3bb8b3d/stable_baselines3-0.10.0-py3-none-any.whl (145kB)
     |████████████████████████████████| 153kB 6.0MB/s
Requirement already satisfied: pytest in /usr/local/lib/python3.7/dist-packages (from finrl==0.0.3) (3.6.4)
Requirement already satisfied: setuptools>=41.4.0 in /usr/local/lib/python3.7/dist-packages (from finrl==0.0.3) (53.0.0)
Requirement already satisfied: wheel>=0.33.6 in /usr/local/lib/python3.7/dist-packages (from finrl==0.0.3) (0.36.2)
Collecting pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2
Cloning https://github.com/quantopian/pyfolio.git to /tmp/pip-install-jk1inqx3/pyfolio
Running command git clone -q https://github.com/quantopian/pyfolio.git /tmp/pip-install-jk1inqx3/pyfolio
Requirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.7/dist-packages (from pandas>=1.1.5->finrl==0.0.3) (2.8.1)
Requirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.7/dist-packages (from pandas>=1.1.5->finrl==0.0.3) (2018.9)
Collecting int-date>=0.1.7
Downloading https://files.pythonhosted.org/packages/43/27/31803df15173ab341fe7548c14154b54227dfd8f630daa09a1c6e7db52f7/int_date-0.1.8-py2.py3-none-any.whl
Requirement already satisfied: requests>=2.20 in /usr/local/lib/python3.7/dist-packages (from yfinance->finrl==0.0.3) (2.23.0)
Requirement already satisfied: multitasking>=0.0.7 in /usr/local/lib/python3.7/dist-packages (from yfinance->finrl==0.0.3) (0.0.9)
Collecting lxml>=4.5.1
  Downloading https://files.pythonhosted.org/packages/d2/88/b25778f17e5320c1c58f8c5060fb5b037288e162bd7554c30799e9ea90db/lxml-4.6.2-cp37-cp37m-manylinux1_x86_64.whl (5.5MB)
     |████████████████████████████████| 5.5MB 8.8MB/s
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->finrl==0.0.3) (2.4.7)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.7/dist-packages (from matplotlib->finrl==0.0.3) (0.10.0)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->finrl==0.0.3) (1.3.1)
Requirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.7/dist-packages (from scikit-learn>=0.21.0->finrl==0.0.3) (1.0.1)
Requirement already satisfied: scipy>=0.17.0 in /usr/local/lib/python3.7/dist-packages (from scikit-learn>=0.21.0->finrl==0.0.3) (1.4.1)
Requirement already satisfied: pyglet<=1.5.0,>=1.4.0 in /usr/local/lib/python3.7/dist-packages (from gym>=0.17->finrl==0.0.3) (1.5.0)
Requirement already satisfied: cloudpickle<1.7.0,>=1.2.0 in /usr/local/lib/python3.7/dist-packages (from gym>=0.17->finrl==0.0.3) (1.3.0)
Requirement already satisfied: torch>=1.4.0 in /usr/local/lib/python3.7/dist-packages (from stable-baselines3[extra]->finrl==0.0.3) (1.7.0+cu101)
Requirement already satisfied: tensorboard; extra == "extra" in /usr/local/lib/python3.7/dist-packages (from stable-baselines3[extra]->finrl==0.0.3) (2.4.1)
Requirement already satisfied: psutil; extra == "extra" in /usr/local/lib/python3.7/dist-packages (from stable-baselines3[extra]->finrl==0.0.3) (5.4.8)
Requirement already satisfied: opencv-python; extra == "extra" in /usr/local/lib/python3.7/dist-packages (from stable-baselines3[extra]->finrl==0.0.3) (4.1.2.30)
Requirement already satisfied: pillow; extra == "extra" in /usr/local/lib/python3.7/dist-packages (from stable-baselines3[extra]->finrl==0.0.3) (7.0.0)
Requirement already satisfied: atari-py~=0.2.0; extra == "extra" in /usr/local/lib/python3.7/dist-packages (from stable-baselines3[extra]->finrl==0.0.3) (0.2.6)
Requirement already satisfied: six>=1.10.0 in /usr/local/lib/python3.7/dist-packages (from pytest->finrl==0.0.3) (1.15.0)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.7/dist-packages (from pytest->finrl==0.0.3) (20.3.0)
Requirement already satisfied: more-itertools>=4.0.0 in /usr/local/lib/python3.7/dist-packages (from pytest->finrl==0.0.3) (8.7.0)
Requirement already satisfied: pluggy<0.8,>=0.5 in /usr/local/lib/python3.7/dist-packages (from pytest->finrl==0.0.3) (0.7.1)
Requirement already satisfied: py>=1.5.0 in /usr/local/lib/python3.7/dist-packages (from pytest->finrl==0.0.3) (1.10.0)
Requirement already satisfied: atomicwrites>=1.0 in /usr/local/lib/python3.7/dist-packages (from pytest->finrl==0.0.3) (1.4.0)
Requirement already satisfied: ipython>=3.2.3 in /usr/local/lib/python3.7/dist-packages (from pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.3) (5.5.0)
Requirement already satisfied: seaborn>=0.7.1 in /usr/local/lib/python3.7/dist-packages (from pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.3) (0.11.1)
Collecting empyrical>=0.5.0
  Downloading https://files.pythonhosted.org/packages/74/43/1b997c21411c6ab7c96dc034e160198272c7a785aeea7654c9bcf98bec83/empyrical-0.5.5.tar.gz (52kB)
     |████████████████████████████████| 61kB 6.1MB/s
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests>=2.20->yfinance->finrl==0.0.3) (1.24.3)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests>=2.20->yfinance->finrl==0.0.3) (3.0.4)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests>=2.20->yfinance->finrl==0.0.3) (2.10)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests>=2.20->yfinance->finrl==0.0.3) (2020.12.5)
Requirement already satisfied: future in /usr/local/lib/python3.7/dist-packages (from pyglet<=1.5.0,>=1.4.0->gym>=0.17->finrl==0.0.3) (0.16.0)
Requirement already satisfied: dataclasses in /usr/local/lib/python3.7/dist-packages (from torch>=1.4.0->stable-baselines3[extra]->finrl==0.0.3) (0.6)
Requirement already satisfied: typing-extensions in /usr/local/lib/python3.7/dist-packages (from torch>=1.4.0->stable-baselines3[extra]->finrl==0.0.3) (3.7.4.3)
Requirement already satisfied: protobuf>=3.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard; extra == "extra"->stable-baselines3[extra]->finrl==0.0.3) (3.12.4)
Requirement already satisfied: grpcio>=1.24.3 in /usr/local/lib/python3.7/dist-packages (from tensorboard; extra == "extra"->stable-baselines3[extra]->finrl==0.0.3) (1.32.0)
Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in /usr/local/lib/python3.7/dist-packages (from tensorboard; extra == "extra"->stable-baselines3[extra]->finrl==0.0.3) (0.4.2)
Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.7/dist-packages (from tensorboard; extra == "extra"->stable-baselines3[extra]->finrl==0.0.3) (3.3.3)
Requirement already satisfied: absl-py>=0.4 in /usr/local/lib/python3.7/dist-packages (from tensorboard; extra == "extra"->stable-baselines3[extra]->finrl==0.0.3) (0.10.0)
Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard; extra == "extra"->stable-baselines3[extra]->finrl==0.0.3) (1.8.0)
Requirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.7/dist-packages (from tensorboard; extra == "extra"->stable-baselines3[extra]->finrl==0.0.3) (1.0.1)
Requirement already satisfied: google-auth<2,>=1.6.3 in /usr/local/lib/python3.7/dist-packages (from tensorboard; extra == "extra"->stable-baselines3[extra]->finrl==0.0.3) (1.27.0)
Requirement already satisfied: pickleshare in /usr/local/lib/python3.7/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.3) (0.7.5)
Requirement already satisfied: pexpect; sys_platform != "win32" in /usr/local/lib/python3.7/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.3) (4.8.0)
Requirement already satisfied: prompt-toolkit<2.0.0,>=1.0.4 in /usr/local/lib/python3.7/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.3) (1.0.18)
Requirement already satisfied: pygments in /usr/local/lib/python3.7/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.3) (2.6.1)
Requirement already satisfied: traitlets>=4.2 in /usr/local/lib/python3.7/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.3) (4.3.3)
Requirement already satisfied: simplegeneric>0.8 in /usr/local/lib/python3.7/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.3) (0.8.1)
Requirement already satisfied: decorator in /usr/local/lib/python3.7/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.3) (4.4.2)
Requirement already satisfied: pandas-datareader>=0.2 in /usr/local/lib/python3.7/dist-packages (from empyrical>=0.5.0->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.3) (0.9.0)
Requirement already satisfied: requests-oauthlib>=0.7.0 in /usr/local/lib/python3.7/dist-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard; extra == "extra"->stable-baselines3[extra]->finrl==0.0.3) (1.3.0)
Requirement already satisfied: importlib-metadata; python_version < "3.8" in /usr/local/lib/python3.7/dist-packages (from markdown>=2.6.8->tensorboard; extra == "extra"->stable-baselines3[extra]->finrl==0.0.3) (3.4.0)
Requirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.7/dist-packages (from google-auth<2,>=1.6.3->tensorboard; extra == "extra"->stable-baselines3[extra]->finrl==0.0.3) (0.2.8)
Requirement already satisfied: rsa<5,>=3.1.4; python_version >= "3.6" in /usr/local/lib/python3.7/dist-packages (from google-auth<2,>=1.6.3->tensorboard; extra == "extra"->stable-baselines3[extra]->finrl==0.0.3) (4.7.1)
Requirement already satisfied: cachetools<5.0,>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from google-auth<2,>=1.6.3->tensorboard; extra == "extra"->stable-baselines3[extra]->finrl==0.0.3) (4.2.1)
Requirement already satisfied: ptyprocess>=0.5 in /usr/local/lib/python3.7/dist-packages (from pexpect; sys_platform != "win32"->ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.3) (0.7.0)
Requirement already satisfied: wcwidth in /usr/local/lib/python3.7/dist-packages (from prompt-toolkit<2.0.0,>=1.0.4->ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.3) (0.2.5)
Requirement already satisfied: ipython-genutils in /usr/local/lib/python3.7/dist-packages (from traitlets>=4.2->ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.3) (0.2.0)
Requirement already satisfied: oauthlib>=3.0.0 in /usr/local/lib/python3.7/dist-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard; extra == "extra"->stable-baselines3[extra]->finrl==0.0.3) (3.1.0)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.7/dist-packages (from importlib-metadata; python_version < "3.8"->markdown>=2.6.8->tensorboard; extra == "extra"->stable-baselines3[extra]->finrl==0.0.3) (3.4.0)
Requirement already satisfied: pyasn1<0.5.0,>=0.4.6 in /usr/local/lib/python3.7/dist-packages (from pyasn1-modules>=0.2.1->google-auth<2,>=1.6.3->tensorboard; extra == "extra"->stable-baselines3[extra]->finrl==0.0.3) (0.4.8)
Building wheels for collected packages: finrl, yfinance, pyfolio, empyrical
Building wheel for finrl (setup.py) ... done
Created wheel for finrl: filename=finrl-0.0.3-cp37-none-any.whl size=38201 sha256=680913f069c396f38e0c508600b450102190f08e0b0bba53c58c334981ccbe6c
Stored in directory: /tmp/pip-ephem-wheel-cache-a1bbwmjm/wheels/9c/19/bf/c644def96612df1ad42c94d5304966797eaa3221dffc5efe0b
Building wheel for yfinance (setup.py) ... done
Created wheel for yfinance: filename=yfinance-0.1.55-py2.py3-none-any.whl size=22616 sha256=2a578f51d56d3d8fff23683c041d6815f487abf3c6c97d4739d122055a6599b3
Stored in directory: /root/.cache/pip/wheels/04/98/cc/2702a4242d60bdc14f48b4557c427ded1fe92aedf257d4565c
Building wheel for pyfolio (setup.py) ... done
Created wheel for pyfolio: filename=pyfolio-0.9.2+75.g4b901f6-cp37-none-any.whl size=75764 sha256=7e1ceb3360e57235c3d97bdbb36969c8ac05da709aa781413f1eca9088669323
Stored in directory: /tmp/pip-ephem-wheel-cache-a1bbwmjm/wheels/43/ce/d9/6752fb6e03205408773235435205a0519d2c608a94f1976e56
Building wheel for empyrical (setup.py) ... done
Created wheel for empyrical: filename=empyrical-0.5.5-cp37-none-any.whl size=39764 sha256=6b772c8c03b900a08799fdd831ee627277cc2c9241dc3103e2602fdd21781bb1
Stored in directory: /root/.cache/pip/wheels/ea/b2/c8/6769d8444d2f2e608fae2641833110668d0ffd1abeb2e9f3fc
Successfully built finrl yfinance pyfolio empyrical
Installing collected packages: int-date, stockstats, lxml, yfinance, stable-baselines3, empyrical, pyfolio, finrl
Found existing installation: lxml 4.2.6
Uninstalling lxml-4.2.6:
Successfully uninstalled lxml-4.2.6
Successfully installed empyrical-0.5.5 finrl-0.0.3 int-date-0.1.8 lxml-4.6.2 pyfolio-0.9.2+75.g4b901f6 stable-baselines3-0.10.0 stockstats-0.3.2 yfinance-0.1.55
###Markdown
2.2. Check if the additional packages needed are present; if not, install them.

* Yahoo Finance API
* pandas
* numpy
* matplotlib
* stockstats
* OpenAI gym
* stable-baselines
* tensorflow
* pyfolio

2.3. Import Packages
###Code
import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
matplotlib.use('Agg')
import datetime
from finrl.config import config
from finrl.marketdata.yahoodownloader import YahooDownloader
from finrl.preprocessing.preprocessors import FeatureEngineer
from finrl.preprocessing.data import data_split
from finrl.env.env_portfolio import StockPortfolioEnv
from finrl.model.models import DRLAgent
from finrl.trade.backtest import backtest_stats, backtest_plot, get_daily_return, get_baseline,convert_daily_return_to_pyfolio_ts
import sys
sys.path.append("../FinRL-Library")
###Output
/usr/local/lib/python3.7/dist-packages/pyfolio/pos.py:27: UserWarning: Module "zipline.assets" not found; multipliers will not be applied to position notionals.
'Module "zipline.assets" not found; multipliers will not be applied'
###Markdown
2.4. Create Folders
###Code
import os
if not os.path.exists("./" + config.DATA_SAVE_DIR):
os.makedirs("./" + config.DATA_SAVE_DIR)
if not os.path.exists("./" + config.TRAINED_MODEL_DIR):
os.makedirs("./" + config.TRAINED_MODEL_DIR)
if not os.path.exists("./" + config.TENSORBOARD_LOG_DIR):
os.makedirs("./" + config.TENSORBOARD_LOG_DIR)
if not os.path.exists("./" + config.RESULTS_DIR):
os.makedirs("./" + config.RESULTS_DIR)
###Output
_____no_output_____
###Markdown
Part 3. Download Data

Yahoo Finance is a website that provides stock data, financial news, financial reports, etc. All the data provided by Yahoo Finance is free.

* FinRL uses the class **YahooDownloader** to fetch data from the Yahoo Finance API.
* Call limit: using the public API (without authentication), you are limited to 2,000 requests per hour per IP (or up to a total of 48,000 requests a day).
###Code
print(config.DOW_30_TICKER)
df = YahooDownloader(start_date = '2008-01-01',
end_date = '2021-01-01',
ticker_list = config.DOW_30_TICKER).fetch_data()
df.head()
df.shape
###Output
_____no_output_____
###Markdown
Part 4: Preprocess Data

Data preprocessing is a crucial step for training a high-quality machine learning model. We need to check for missing data and do feature engineering in order to convert the data into a model-ready state.

* Add technical indicators. In practical trading, various information needs to be taken into account, for example historical stock prices, current holding shares, and technical indicators. In this article, we demonstrate two commonly used technical indicators: MACD and RSI.
* Add turbulence index. Risk aversion reflects whether an investor will choose to preserve capital. It also influences one's trading strategy when facing different market volatility levels. To control risk in a worst-case scenario, such as the financial crisis of 2007–2008, FinRL employs the financial turbulence index, which measures extreme asset price fluctuation.
###Code
fe = FeatureEngineer(
use_technical_indicator=True,
use_turbulence=False,
user_defined_feature = False)
df = fe.preprocess_data(df)
df.shape
df.head()
###Output
_____no_output_____
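###Markdown
For intuition, the two indicators mentioned above can be reproduced directly with the `stockstats` package that FinRL's `FeatureEngineer` uses under the hood. A minimal sketch, assuming a single-ticker slice with a `close` column; the column names `macd` and `rsi_30` follow the stockstats naming convention.
###Code
from stockstats import StockDataFrame
# compute the indicators on demand for one ticker
single = df[df.tic == df.tic.unique()[0]].copy()
stock = StockDataFrame.retype(single)
macd = stock["macd"]    # trend-following: difference of fast/slow EMAs
rsi = stock["rsi_30"]   # momentum: 30-day relative strength index
print(pd.concat([macd, rsi], axis=1).tail())
###Output
_____no_output_____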
###Markdown
Add covariance matrix as states
###Code
# add covariance matrix as states
df=df.sort_values(['date','tic'],ignore_index=True)
df.index = df.date.factorize()[0]
cov_list = []
# look back is one year
lookback=252
for i in range(lookback,len(df.index.unique())):
data_lookback = df.loc[i-lookback:i,:]
price_lookback=data_lookback.pivot_table(index = 'date',columns = 'tic', values = 'close')
return_lookback = price_lookback.pct_change().dropna()
covs = return_lookback.cov().values
cov_list.append(covs)
df_cov = pd.DataFrame({'date':df.date.unique()[lookback:],'cov_list':cov_list})
df = df.merge(df_cov, on='date')
df = df.sort_values(['date','tic']).reset_index(drop=True)
df.shape
df.head()
###Output
_____no_output_____
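###Markdown
Sanity check (a sketch): each entry of `cov_list` should be a square covariance matrix with one row and column per ticker, estimated over the 252-day lookback window.
###Code
# the first stored covariance matrix; expect (n_stocks, n_stocks)
print(df.cov_list.iloc[0].shape)
###Output
_____no_output_____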
###Markdown
Part 5. Design Environment. Considering the stochastic and interactive nature of automated stock trading tasks, a financial task is modeled as a **Markov Decision Process (MDP)** problem. The training process involves observing stock price changes, taking an action, and calculating the reward, so that the agent adjusts its strategy accordingly. By interacting with the environment, the trading agent derives a trading strategy that maximizes rewards as time proceeds. Our trading environments, based on the OpenAI Gym framework, simulate live stock markets with real market data according to the principle of time-driven simulation. The action space describes the allowed actions through which the agent interacts with the environment. In the simplest case, an action takes one of three values {-1, 0, 1}, where -1, 0, 1 represent selling, holding, and buying one share. An action can also be carried out on multiple shares: we use an action space {-k, …, -1, 0, 1, …, k}, where k denotes the number of shares to buy and -k the number of shares to sell. For example, "Buy 10 shares of AAPL" and "Sell 10 shares of AAPL" correspond to 10 and -10, respectively. The continuous action space needs to be normalized to [-1, 1], since the policy is defined on a Gaussian distribution, which needs to be normalized and symmetric. In the portfolio environment below, raw actions are instead mapped to nonnegative portfolio weights via a softmax; see the sketch after the next cell. Training data split: 2009-01-01 to 2018-12-31
###Code
train = data_split(df, '2009-01-01','2019-01-01')
#trade = data_split(df, '2020-01-01', config.END_DATE)
train.head()
###Output
_____no_output_____
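###Markdown
As a standalone illustration of how raw model actions become portfolio weights in the environment defined below (a sketch for intuition; the same computation appears later as `softmax_normalization` inside `StockPortfolioEnv`):
###Code
import numpy as np
# softmax maps arbitrary raw actions to nonnegative weights that sum to 1
raw_actions = np.random.uniform(0, 1, size=5)
weights = np.exp(raw_actions) / np.sum(np.exp(raw_actions))
print(weights, weights.sum())
###Output
_____no_output_____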
###Markdown
Environment for Portfolio Allocation
###Code
import numpy as np
import pandas as pd
from gym.utils import seeding
import gym
from gym import spaces
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
from stable_baselines3.common.vec_env import DummyVecEnv
class StockPortfolioEnv(gym.Env):
"""A single stock trading environment for OpenAI gym
Attributes
----------
df: DataFrame
input data
stock_dim : int
number of unique stocks
hmax : int
maximum number of shares to trade
initial_amount : int
start money
transaction_cost_pct: float
transaction cost percentage per trade
reward_scaling: float
scaling factor for reward, good for training
state_space: int
the dimension of input features
action_space: int
equals stock dimension
tech_indicator_list: list
a list of technical indicator names
turbulence_threshold: int
a threshold to control risk aversion
day: int
an increment number to control date
Methods
-------
_sell_stock()
perform sell action based on the sign of the action
_buy_stock()
perform buy action based on the sign of the action
step()
at each step the agent will return actions, then
we will calculate the reward, and return the next observation.
reset()
reset the environment
render()
        return the current state of the environment
save_asset_memory()
return account value at each time step
save_action_memory()
return actions/positions at each time step
"""
metadata = {'render.modes': ['human']}
def __init__(self,
df,
stock_dim,
hmax,
initial_amount,
transaction_cost_pct,
reward_scaling,
state_space,
action_space,
tech_indicator_list,
turbulence_threshold=None,
lookback=252,
day = 0):
#super(StockEnv, self).__init__()
#money = 10 , scope = 1
self.day = day
self.lookback=lookback
self.df = df
self.stock_dim = stock_dim
self.hmax = hmax
self.initial_amount = initial_amount
self.transaction_cost_pct =transaction_cost_pct
self.reward_scaling = reward_scaling
self.state_space = state_space
self.action_space = action_space
self.tech_indicator_list = tech_indicator_list
# action_space normalization and shape is self.stock_dim
self.action_space = spaces.Box(low = 0, high = 1,shape = (self.action_space,))
# Shape = (34, 30)
# covariance matrix + technical indicators
self.observation_space = spaces.Box(low=-np.inf, high=np.inf, shape = (self.state_space+len(self.tech_indicator_list),self.state_space))
# load data from a pandas dataframe
self.data = self.df.loc[self.day,:]
self.covs = self.data['cov_list'].values[0]
self.state = np.append(np.array(self.covs), [self.data[tech].values.tolist() for tech in self.tech_indicator_list ], axis=0)
self.terminal = False
self.turbulence_threshold = turbulence_threshold
        # initialize state: initial portfolio return + individual stock return + individual weights
self.portfolio_value = self.initial_amount
# memorize portfolio value each step
self.asset_memory = [self.initial_amount]
# memorize portfolio return each step
self.portfolio_return_memory = [0]
self.actions_memory=[[1/self.stock_dim]*self.stock_dim]
self.date_memory=[self.data.date.unique()[0]]
def step(self, actions):
# print(self.day)
self.terminal = self.day >= len(self.df.index.unique())-1
# print(actions)
if self.terminal:
df = pd.DataFrame(self.portfolio_return_memory)
df.columns = ['daily_return']
plt.plot(df.daily_return.cumsum(),'r')
plt.savefig('results/cumulative_reward.png')
plt.close()
plt.plot(self.portfolio_return_memory,'r')
plt.savefig('results/rewards.png')
plt.close()
print("=================================")
print("begin_total_asset:{}".format(self.asset_memory[0]))
print("end_total_asset:{}".format(self.portfolio_value))
df_daily_return = pd.DataFrame(self.portfolio_return_memory)
df_daily_return.columns = ['daily_return']
if df_daily_return['daily_return'].std() !=0:
sharpe = (252**0.5)*df_daily_return['daily_return'].mean()/ \
df_daily_return['daily_return'].std()
print("Sharpe: ",sharpe)
print("=================================")
return self.state, self.reward, self.terminal,{}
else:
#print("Model actions: ",actions)
# actions are the portfolio weight
# normalize to sum of 1
#if (np.array(actions) - np.array(actions).min()).sum() != 0:
# norm_actions = (np.array(actions) - np.array(actions).min()) / (np.array(actions) - np.array(actions).min()).sum()
#else:
# norm_actions = actions
weights = self.softmax_normalization(actions)
#print("Normalized actions: ", weights)
self.actions_memory.append(weights)
last_day_memory = self.data
#load next state
self.day += 1
self.data = self.df.loc[self.day,:]
self.covs = self.data['cov_list'].values[0]
self.state = np.append(np.array(self.covs), [self.data[tech].values.tolist() for tech in self.tech_indicator_list ], axis=0)
#print(self.state)
            # calculate portfolio return
# individual stocks' return * weight
portfolio_return = sum(((self.data.close.values / last_day_memory.close.values)-1)*weights)
# update portfolio value
new_portfolio_value = self.portfolio_value*(1+portfolio_return)
self.portfolio_value = new_portfolio_value
# save into memory
self.portfolio_return_memory.append(portfolio_return)
self.date_memory.append(self.data.date.unique()[0])
self.asset_memory.append(new_portfolio_value)
            # the reward is the new portfolio value (i.e., the end portfolio value at the final step)
self.reward = new_portfolio_value
#print("Step reward: ", self.reward)
#self.reward = self.reward*self.reward_scaling
return self.state, self.reward, self.terminal, {}
def reset(self):
self.asset_memory = [self.initial_amount]
self.day = 0
self.data = self.df.loc[self.day,:]
# load states
self.covs = self.data['cov_list'].values[0]
self.state = np.append(np.array(self.covs), [self.data[tech].values.tolist() for tech in self.tech_indicator_list ], axis=0)
self.portfolio_value = self.initial_amount
#self.cost = 0
#self.trades = 0
self.terminal = False
self.portfolio_return_memory = [0]
self.actions_memory=[[1/self.stock_dim]*self.stock_dim]
self.date_memory=[self.data.date.unique()[0]]
return self.state
def render(self, mode='human'):
return self.state
def softmax_normalization(self, actions):
numerator = np.exp(actions)
denominator = np.sum(np.exp(actions))
softmax_output = numerator/denominator
return softmax_output
def save_asset_memory(self):
date_list = self.date_memory
portfolio_return = self.portfolio_return_memory
#print(len(date_list))
#print(len(asset_list))
df_account_value = pd.DataFrame({'date':date_list,'daily_return':portfolio_return})
return df_account_value
def save_action_memory(self):
# date and close price length must match actions length
date_list = self.date_memory
df_date = pd.DataFrame(date_list)
df_date.columns = ['date']
action_list = self.actions_memory
df_actions = pd.DataFrame(action_list)
df_actions.columns = self.data.tic.values
df_actions.index = df_date.date
#df_actions = pd.DataFrame({'date':date_list,'actions':action_list})
return df_actions
def _seed(self, seed=None):
self.np_random, seed = seeding.np_random(seed)
return [seed]
def get_sb_env(self):
e = DummyVecEnv([lambda: self])
obs = e.reset()
return e, obs
stock_dimension = len(train.tic.unique())
state_space = stock_dimension
print(f"Stock Dimension: {stock_dimension}, State Space: {state_space}")
env_kwargs = {
"hmax": 100,
"initial_amount": 1000000,
"transaction_cost_pct": 0.001,
"state_space": state_space,
"stock_dim": stock_dimension,
"tech_indicator_list": config.TECHNICAL_INDICATORS_LIST,
"action_space": stock_dimension,
"reward_scaling": 1e-4
}
e_train_gym = StockPortfolioEnv(df = train, **env_kwargs)
env_train, _ = e_train_gym.get_sb_env()
print(type(env_train))
###Output
<class 'stable_baselines3.common.vec_env.dummy_vec_env.DummyVecEnv'>
###Markdown
Part 6: Implement DRL Algorithms* The implementations of the DRL algorithms are based on **OpenAI Baselines** and **Stable Baselines**. Stable Baselines is a fork of OpenAI Baselines, with major structural refactoring and code cleanups.* The FinRL library includes fine-tuned standard DRL algorithms, such as DQN, DDPG, Multi-Agent DDPG, PPO, SAC, A2C and TD3. We also allow users to design their own DRL algorithms by adapting these implementations.
###Code
# initialize
agent = DRLAgent(env = env_train)
###Output
_____no_output_____
###Markdown
Model 1: **A2C**
###Code
agent = DRLAgent(env = env_train)
A2C_PARAMS = {"n_steps": 5, "ent_coef": 0.005, "learning_rate": 0.0002}
model_a2c = agent.get_model(model_name="a2c",model_kwargs = A2C_PARAMS)
trained_a2c = agent.train_model(model=model_a2c,
tb_log_name='a2c',
total_timesteps=60000)
###Output
Logging to tensorboard_log/a2c/a2c_1
-------------------------------------
| time/ | |
| fps | 130 |
| iterations | 100 |
| time_elapsed | 3 |
| total_timesteps | 500 |
| train/ | |
| entropy_loss | -42.5 |
| explained_variance | -4.23e+15 |
| learning_rate | 0.0002 |
| n_updates | 99 |
| policy_loss | 1.8e+08 |
| std | 0.997 |
| value_loss | 2.48e+13 |
-------------------------------------
-------------------------------------
| time/ | |
| fps | 157 |
| iterations | 200 |
| time_elapsed | 6 |
| total_timesteps | 1000 |
| train/ | |
| entropy_loss | -42.5 |
| explained_variance | -7.89e+14 |
| learning_rate | 0.0002 |
| n_updates | 199 |
| policy_loss | 2.44e+08 |
| std | 0.997 |
| value_loss | 4.08e+13 |
-------------------------------------
-------------------------------------
| time/ | |
| fps | 167 |
| iterations | 300 |
| time_elapsed | 8 |
| total_timesteps | 1500 |
| train/ | |
| entropy_loss | -42.5 |
| explained_variance | -9.77e+25 |
| learning_rate | 0.0002 |
| n_updates | 299 |
| policy_loss | 4.02e+08 |
| std | 0.997 |
| value_loss | 9.82e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 179 |
| iterations | 400 |
| time_elapsed | 11 |
| total_timesteps | 2000 |
| train/ | |
| entropy_loss | -42.5 |
| explained_variance | -6.9e+16 |
| learning_rate | 0.0002 |
| n_updates | 399 |
| policy_loss | 4.57e+08 |
| std | 0.997 |
| value_loss | 1.39e+14 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 189 |
| iterations | 500 |
| time_elapsed | 13 |
| total_timesteps | 2500 |
| train/ | |
| entropy_loss | -42.5 |
| explained_variance | -4.81e+17 |
| learning_rate | 0.0002 |
| n_updates | 499 |
| policy_loss | 6.13e+08 |
| std | 0.996 |
| value_loss | 2.53e+14 |
-------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4550666.315740787
Sharpe: 1.0302838133559835
=================================
------------------------------------
| time/ | |
| fps | 192 |
| iterations | 600 |
| time_elapsed | 15 |
| total_timesteps | 3000 |
| train/ | |
| entropy_loss | -42.4 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 599 |
| policy_loss | 1.96e+08 |
| std | 0.996 |
| value_loss | 2.53e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 197 |
| iterations | 700 |
| time_elapsed | 17 |
| total_timesteps | 3500 |
| train/ | |
| entropy_loss | -42.4 |
| explained_variance | -2.18e+17 |
| learning_rate | 0.0002 |
| n_updates | 699 |
| policy_loss | 2.37e+08 |
| std | 0.996 |
| value_loss | 4.06e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 202 |
| iterations | 800 |
| time_elapsed | 19 |
| total_timesteps | 4000 |
| train/ | |
| entropy_loss | -42.4 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 799 |
| policy_loss | 3.7e+08 |
| std | 0.995 |
| value_loss | 1.01e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 206 |
| iterations | 900 |
| time_elapsed | 21 |
| total_timesteps | 4500 |
| train/ | |
| entropy_loss | -42.4 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 899 |
| policy_loss | 4.31e+08 |
| std | 0.995 |
| value_loss | 1.29e+14 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 208 |
| iterations | 1000 |
| time_elapsed | 23 |
| total_timesteps | 5000 |
| train/ | |
| entropy_loss | -42.4 |
| explained_variance | -1.18e+18 |
| learning_rate | 0.0002 |
| n_updates | 999 |
| policy_loss | 6.01e+08 |
| std | 0.995 |
| value_loss | 2.52e+14 |
-------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4538927.251756459
Sharpe: 1.0239597239761906
=================================
------------------------------------
| time/ | |
| fps | 209 |
| iterations | 1100 |
| time_elapsed | 26 |
| total_timesteps | 5500 |
| train/ | |
| entropy_loss | -42.4 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 1099 |
| policy_loss | 2.02e+08 |
| std | 0.995 |
| value_loss | 2.44e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 211 |
| iterations | 1200 |
| time_elapsed | 28 |
| total_timesteps | 6000 |
| train/ | |
| entropy_loss | -42.4 |
| explained_variance | -3.58e+18 |
| learning_rate | 0.0002 |
| n_updates | 1199 |
| policy_loss | 2.77e+08 |
| std | 0.995 |
| value_loss | 4.09e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 213 |
| iterations | 1300 |
| time_elapsed | 30 |
| total_timesteps | 6500 |
| train/ | |
| entropy_loss | -42.4 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 1299 |
| policy_loss | 3.35e+08 |
| std | 0.994 |
| value_loss | 8.06e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 215 |
| iterations | 1400 |
| time_elapsed | 32 |
| total_timesteps | 7000 |
| train/ | |
| entropy_loss | -42.4 |
| explained_variance | -1.69e+20 |
| learning_rate | 0.0002 |
| n_updates | 1399 |
| policy_loss | 4.1e+08 |
| std | 0.994 |
| value_loss | 1.2e+14 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 217 |
| iterations | 1500 |
| time_elapsed | 34 |
| total_timesteps | 7500 |
| train/ | |
| entropy_loss | -42.4 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 1499 |
| policy_loss | 5.74e+08 |
| std | 0.994 |
| value_loss | 2.47e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4569623.286530429
Sharpe: 1.0309827263626288
=================================
-------------------------------------
| time/ | |
| fps | 217 |
| iterations | 1600 |
| time_elapsed | 36 |
| total_timesteps | 8000 |
| train/ | |
| entropy_loss | -42.4 |
| explained_variance | -1.11e+24 |
| learning_rate | 0.0002 |
| n_updates | 1599 |
| policy_loss | 1.81e+08 |
| std | 0.994 |
| value_loss | 2.28e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 218 |
| iterations | 1700 |
| time_elapsed | 38 |
| total_timesteps | 8500 |
| train/ | |
| entropy_loss | -42.4 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 1699 |
| policy_loss | 2.6e+08 |
| std | 0.993 |
| value_loss | 4.54e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 216 |
| iterations | 1800 |
| time_elapsed | 41 |
| total_timesteps | 9000 |
| train/ | |
| entropy_loss | -42.3 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 1799 |
| policy_loss | 3.57e+08 |
| std | 0.993 |
| value_loss | 9.62e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 216 |
| iterations | 1900 |
| time_elapsed | 43 |
| total_timesteps | 9500 |
| train/ | |
| entropy_loss | -42.3 |
| explained_variance | -6.95e+20 |
| learning_rate | 0.0002 |
| n_updates | 1899 |
| policy_loss | 4.08e+08 |
| std | 0.992 |
| value_loss | 1.33e+14 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 216 |
| iterations | 2000 |
| time_elapsed | 46 |
| total_timesteps | 10000 |
| train/ | |
| entropy_loss | -42.3 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 1999 |
| policy_loss | 7.22e+08 |
| std | 0.991 |
| value_loss | 3.02e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4784563.101868668
Sharpe: 1.0546332869946304
=================================
------------------------------------
| time/ | |
| fps | 216 |
| iterations | 2100 |
| time_elapsed | 48 |
| total_timesteps | 10500 |
| train/ | |
| entropy_loss | -42.3 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 2099 |
| policy_loss | 1.64e+08 |
| std | 0.991 |
| value_loss | 2.02e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 217 |
| iterations | 2200 |
| time_elapsed | 50 |
| total_timesteps | 11000 |
| train/ | |
| entropy_loss | -42.3 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 2199 |
| policy_loss | 2.31e+08 |
| std | 0.99 |
| value_loss | 3.61e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 218 |
| iterations | 2300 |
| time_elapsed | 52 |
| total_timesteps | 11500 |
| train/ | |
| entropy_loss | -42.3 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 2299 |
| policy_loss | 3.07e+08 |
| std | 0.99 |
| value_loss | 7.81e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 219 |
| iterations | 2400 |
| time_elapsed | 54 |
| total_timesteps | 12000 |
| train/ | |
| entropy_loss | -42.3 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 2399 |
| policy_loss | 4.03e+08 |
| std | 0.99 |
| value_loss | 1.05e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 220 |
| iterations | 2500 |
| time_elapsed | 56 |
| total_timesteps | 12500 |
| train/ | |
| entropy_loss | -42.3 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 2499 |
| policy_loss | 5.57e+08 |
| std | 0.99 |
| value_loss | 2.27e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4265807.380536508
Sharpe: 0.9867782700137868
=================================
-------------------------------------
| time/ | |
| fps | 219 |
| iterations | 2600 |
| time_elapsed | 59 |
| total_timesteps | 13000 |
| train/ | |
| entropy_loss | -42.3 |
| explained_variance | -3.35e+20 |
| learning_rate | 0.0002 |
| n_updates | 2599 |
| policy_loss | 1.62e+08 |
| std | 0.989 |
| value_loss | 1.89e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 220 |
| iterations | 2700 |
| time_elapsed | 61 |
| total_timesteps | 13500 |
| train/ | |
| entropy_loss | -42.2 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 2699 |
| policy_loss | 2.56e+08 |
| std | 0.989 |
| value_loss | 4.37e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 221 |
| iterations | 2800 |
| time_elapsed | 63 |
| total_timesteps | 14000 |
| train/ | |
| entropy_loss | -42.2 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 2799 |
| policy_loss | 3.57e+08 |
| std | 0.989 |
| value_loss | 9.53e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 221 |
| iterations | 2900 |
| time_elapsed | 65 |
| total_timesteps | 14500 |
| train/ | |
| entropy_loss | -42.2 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 2899 |
| policy_loss | 4.31e+08 |
| std | 0.988 |
| value_loss | 1.42e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 222 |
| iterations | 3000 |
| time_elapsed | 67 |
| total_timesteps | 15000 |
| train/ | |
| entropy_loss | -42.2 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 2999 |
| policy_loss | 6.16e+08 |
| std | 0.988 |
| value_loss | 2.68e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4737187.266470802
Sharpe: 1.048554781654813
=================================
------------------------------------
| time/ | |
| fps | 222 |
| iterations | 3100 |
| time_elapsed | 69 |
| total_timesteps | 15500 |
| train/ | |
| entropy_loss | -42.2 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 3099 |
| policy_loss | 1.57e+08 |
| std | 0.988 |
| value_loss | 1.96e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 222 |
| iterations | 3200 |
| time_elapsed | 71 |
| total_timesteps | 16000 |
| train/ | |
| entropy_loss | -42.2 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 3199 |
| policy_loss | 2.45e+08 |
| std | 0.988 |
| value_loss | 3.58e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 223 |
| iterations | 3300 |
| time_elapsed | 73 |
| total_timesteps | 16500 |
| train/ | |
| entropy_loss | -42.2 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 3299 |
| policy_loss | 3.71e+08 |
| std | 0.987 |
| value_loss | 8.38e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 223 |
| iterations | 3400 |
| time_elapsed | 75 |
| total_timesteps | 17000 |
| train/ | |
| entropy_loss | -42.2 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 3399 |
| policy_loss | 3.89e+08 |
| std | 0.987 |
| value_loss | 1.19e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 223 |
| iterations | 3500 |
| time_elapsed | 78 |
| total_timesteps | 17500 |
| train/ | |
| entropy_loss | -42.2 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 3499 |
| policy_loss | 5.47e+08 |
| std | 0.987 |
| value_loss | 2.32e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4594345.465329124
Sharpe: 1.0338662249918555
=================================
-------------------------------------
| time/ | |
| fps | 223 |
| iterations | 3600 |
| time_elapsed | 80 |
| total_timesteps | 18000 |
| train/ | |
| entropy_loss | -42.2 |
| explained_variance | -2.39e+23 |
| learning_rate | 0.0002 |
| n_updates | 3599 |
| policy_loss | 1.56e+08 |
| std | 0.987 |
| value_loss | 1.98e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 224 |
| iterations | 3700 |
| time_elapsed | 82 |
| total_timesteps | 18500 |
| train/ | |
| entropy_loss | -42.1 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 3699 |
| policy_loss | 2.45e+08 |
| std | 0.986 |
| value_loss | 3.78e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 224 |
| iterations | 3800 |
| time_elapsed | 84 |
| total_timesteps | 19000 |
| train/ | |
| entropy_loss | -42.1 |
| explained_variance | -1.11e+24 |
| learning_rate | 0.0002 |
| n_updates | 3799 |
| policy_loss | 3.75e+08 |
| std | 0.986 |
| value_loss | 9.09e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 224 |
| iterations | 3900 |
| time_elapsed | 86 |
| total_timesteps | 19500 |
| train/ | |
| entropy_loss | -42.1 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 3899 |
| policy_loss | 4.23e+08 |
| std | 0.986 |
| value_loss | 1.09e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 225 |
| iterations | 4000 |
| time_elapsed | 88 |
| total_timesteps | 20000 |
| train/ | |
| entropy_loss | -42.1 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 3999 |
| policy_loss | 5.46e+08 |
| std | 0.985 |
| value_loss | 2.21e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4537629.671792137
Sharpe: 1.027306122996326
=================================
------------------------------------
| time/ | |
| fps | 224 |
| iterations | 4100 |
| time_elapsed | 91 |
| total_timesteps | 20500 |
| train/ | |
| entropy_loss | -42.1 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 4099 |
| policy_loss | 1.76e+08 |
| std | 0.985 |
| value_loss | 1.96e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 225 |
| iterations | 4200 |
| time_elapsed | 93 |
| total_timesteps | 21000 |
| train/ | |
| entropy_loss | -42 |
| explained_variance | -4.27e+23 |
| learning_rate | 0.0002 |
| n_updates | 4199 |
| policy_loss | 2.17e+08 |
| std | 0.983 |
| value_loss | 3.5e+13 |
-------------------------------------
-------------------------------------
| time/ | |
| fps | 225 |
| iterations | 4300 |
| time_elapsed | 95 |
| total_timesteps | 21500 |
| train/ | |
| entropy_loss | -42 |
| explained_variance | -9.61e+23 |
| learning_rate | 0.0002 |
| n_updates | 4299 |
| policy_loss | 3.36e+08 |
| std | 0.982 |
| value_loss | 7.88e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 225 |
| iterations | 4400 |
| time_elapsed | 97 |
| total_timesteps | 22000 |
| train/ | |
| entropy_loss | -42 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 4399 |
| policy_loss | 3.9e+08 |
| std | 0.982 |
| value_loss | 1.09e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 226 |
| iterations | 4500 |
| time_elapsed | 99 |
| total_timesteps | 22500 |
| train/ | |
| entropy_loss | -42 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 4499 |
| policy_loss | 5.96e+08 |
| std | 0.982 |
| value_loss | 2.24e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4641050.148925118
Sharpe: 1.035206741352005
=================================
------------------------------------
| time/ | |
| fps | 226 |
| iterations | 4600 |
| time_elapsed | 101 |
| total_timesteps | 23000 |
| train/ | |
| entropy_loss | -42 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 4599 |
| policy_loss | 1.86e+08 |
| std | 0.981 |
| value_loss | 2.04e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 226 |
| iterations | 4700 |
| time_elapsed | 103 |
| total_timesteps | 23500 |
| train/ | |
| entropy_loss | -42 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 4699 |
| policy_loss | 2.4e+08 |
| std | 0.981 |
| value_loss | 4.09e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 226 |
| iterations | 4800 |
| time_elapsed | 105 |
| total_timesteps | 24000 |
| train/ | |
| entropy_loss | -42 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 4799 |
| policy_loss | 3.69e+08 |
| std | 0.981 |
| value_loss | 9.69e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 226 |
| iterations | 4900 |
| time_elapsed | 108 |
| total_timesteps | 24500 |
| train/ | |
| entropy_loss | -42 |
| explained_variance | -5.9e+21 |
| learning_rate | 0.0002 |
| n_updates | 4899 |
| policy_loss | 4.46e+08 |
| std | 0.98 |
| value_loss | 1.36e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 226 |
| iterations | 5000 |
| time_elapsed | 110 |
| total_timesteps | 25000 |
| train/ | |
| entropy_loss | -42 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 4999 |
| policy_loss | 6.05e+08 |
| std | 0.98 |
| value_loss | 2.56e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:5080677.099515816
Sharpe: 1.0970818985375046
=================================
------------------------------------
| time/ | |
| fps | 225 |
| iterations | 5100 |
| time_elapsed | 113 |
| total_timesteps | 25500 |
| train/ | |
| entropy_loss | -42 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 5099 |
| policy_loss | 1.7e+08 |
| std | 0.98 |
| value_loss | 2.24e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 224 |
| iterations | 5200 |
| time_elapsed | 115 |
| total_timesteps | 26000 |
| train/ | |
| entropy_loss | -42 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 5199 |
| policy_loss | 2.39e+08 |
| std | 0.98 |
| value_loss | 3.92e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 224 |
| iterations | 5300 |
| time_elapsed | 117 |
| total_timesteps | 26500 |
| train/ | |
| entropy_loss | -42 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 5299 |
| policy_loss | 3.24e+08 |
| std | 0.98 |
| value_loss | 8.04e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 224 |
| iterations | 5400 |
| time_elapsed | 120 |
| total_timesteps | 27000 |
| train/ | |
| entropy_loss | -41.9 |
| explained_variance | -4.8e+21 |
| learning_rate | 0.0002 |
| n_updates | 5399 |
| policy_loss | 4.29e+08 |
| std | 0.979 |
| value_loss | 1.22e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 224 |
| iterations | 5500 |
| time_elapsed | 122 |
| total_timesteps | 27500 |
| train/ | |
| entropy_loss | -41.9 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 5499 |
| policy_loss | 5.4e+08 |
| std | 0.979 |
| value_loss | 2.31e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4811657.503165074
Sharpe: 1.0589276474603557
=================================
------------------------------------
| time/ | |
| fps | 224 |
| iterations | 5600 |
| time_elapsed | 124 |
| total_timesteps | 28000 |
| train/ | |
| entropy_loss | -41.9 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 5599 |
| policy_loss | 1.71e+08 |
| std | 0.978 |
| value_loss | 2.12e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 224 |
| iterations | 5700 |
| time_elapsed | 126 |
| total_timesteps | 28500 |
| train/ | |
| entropy_loss | -41.9 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 5699 |
| policy_loss | 2.15e+08 |
| std | 0.978 |
| value_loss | 3.76e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 224 |
| iterations | 5800 |
| time_elapsed | 129 |
| total_timesteps | 29000 |
| train/ | |
| entropy_loss | -41.9 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 5799 |
| policy_loss | 3.25e+08 |
| std | 0.978 |
| value_loss | 7.21e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 224 |
| iterations | 5900 |
| time_elapsed | 131 |
| total_timesteps | 29500 |
| train/ | |
| entropy_loss | -41.9 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 5899 |
| policy_loss | 3.48e+08 |
| std | 0.977 |
| value_loss | 9.82e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 225 |
| iterations | 6000 |
| time_elapsed | 133 |
| total_timesteps | 30000 |
| train/ | |
| entropy_loss | -41.8 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 5999 |
| policy_loss | 5.64e+08 |
| std | 0.976 |
| value_loss | 2.13e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4485060.270775738
Sharpe: 1.01141473877631
=================================
------------------------------------
| time/ | |
| fps | 224 |
| iterations | 6100 |
| time_elapsed | 135 |
| total_timesteps | 30500 |
| train/ | |
| entropy_loss | -41.8 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 6099 |
| policy_loss | 1.76e+08 |
| std | 0.976 |
| value_loss | 2.21e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 224 |
| iterations | 6200 |
| time_elapsed | 137 |
| total_timesteps | 31000 |
| train/ | |
| entropy_loss | -41.8 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 6199 |
| policy_loss | 2.37e+08 |
| std | 0.976 |
| value_loss | 3.86e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 224 |
| iterations | 6300 |
| time_elapsed | 140 |
| total_timesteps | 31500 |
| train/ | |
| entropy_loss | -41.8 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 6299 |
| policy_loss | 3.28e+08 |
| std | 0.975 |
| value_loss | 7.7e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 224 |
| iterations | 6400 |
| time_elapsed | 142 |
| total_timesteps | 32000 |
| train/ | |
| entropy_loss | -41.8 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 6399 |
| policy_loss | 4.03e+08 |
| std | 0.975 |
| value_loss | 1.03e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 224 |
| iterations | 6500 |
| time_elapsed | 144 |
| total_timesteps | 32500 |
| train/ | |
| entropy_loss | -41.8 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 6499 |
| policy_loss | 5.93e+08 |
| std | 0.975 |
| value_loss | 2.38e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4716704.9549536165
Sharpe: 1.0510500905659037
=================================
------------------------------------
| time/ | |
| fps | 224 |
| iterations | 6600 |
| time_elapsed | 147 |
| total_timesteps | 33000 |
| train/ | |
| entropy_loss | -41.8 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 6599 |
| policy_loss | 1.78e+08 |
| std | 0.975 |
| value_loss | 2.04e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 224 |
| iterations | 6700 |
| time_elapsed | 149 |
| total_timesteps | 33500 |
| train/ | |
| entropy_loss | -41.8 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 6699 |
| policy_loss | 2.4e+08 |
| std | 0.974 |
| value_loss | 3.85e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 224 |
| iterations | 6800 |
| time_elapsed | 151 |
| total_timesteps | 34000 |
| train/ | |
| entropy_loss | -41.8 |
| explained_variance | -1.16e+24 |
| learning_rate | 0.0002 |
| n_updates | 6799 |
| policy_loss | 3.2e+08 |
| std | 0.974 |
| value_loss | 7.66e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 224 |
| iterations | 6900 |
| time_elapsed | 153 |
| total_timesteps | 34500 |
| train/ | |
| entropy_loss | -41.8 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 6899 |
| policy_loss | 3.45e+08 |
| std | 0.973 |
| value_loss | 9.59e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 224 |
| iterations | 7000 |
| time_elapsed | 155 |
| total_timesteps | 35000 |
| train/ | |
| entropy_loss | -41.8 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 6999 |
| policy_loss | 6.22e+08 |
| std | 0.973 |
| value_loss | 2.58e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4722061.646242311
Sharpe: 1.0529486633467167
=================================
------------------------------------
| time/ | |
| fps | 224 |
| iterations | 7100 |
| time_elapsed | 158 |
| total_timesteps | 35500 |
| train/ | |
| entropy_loss | -41.7 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 7099 |
| policy_loss | 1.63e+08 |
| std | 0.973 |
| value_loss | 1.91e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 224 |
| iterations | 7200 |
| time_elapsed | 160 |
| total_timesteps | 36000 |
| train/ | |
| entropy_loss | -41.7 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 7199 |
| policy_loss | 2.26e+08 |
| std | 0.973 |
| value_loss | 3.43e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 224 |
| iterations | 7300 |
| time_elapsed | 162 |
| total_timesteps | 36500 |
| train/ | |
| entropy_loss | -41.7 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 7299 |
| policy_loss | 3.31e+08 |
| std | 0.972 |
| value_loss | 7.69e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 223 |
| iterations | 7400 |
| time_elapsed | 165 |
| total_timesteps | 37000 |
| train/ | |
| entropy_loss | -41.7 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 7399 |
| policy_loss | 3.65e+08 |
| std | 0.971 |
| value_loss | 9.37e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 222 |
| iterations | 7500 |
| time_elapsed | 168 |
| total_timesteps | 37500 |
| train/ | |
| entropy_loss | -41.7 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 7499 |
| policy_loss | 5.72e+08 |
| std | 0.971 |
| value_loss | 2.37e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4651172.332054012
Sharpe: 1.0366825368944979
=================================
------------------------------------
| time/ | |
| fps | 221 |
| iterations | 7600 |
| time_elapsed | 171 |
| total_timesteps | 38000 |
| train/ | |
| entropy_loss | -41.7 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 7599 |
| policy_loss | 1.71e+08 |
| std | 0.971 |
| value_loss | 2e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 220 |
| iterations | 7700 |
| time_elapsed | 174 |
| total_timesteps | 38500 |
| train/ | |
| entropy_loss | -41.6 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 7699 |
| policy_loss | 2e+08 |
| std | 0.97 |
| value_loss | 3.27e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 219 |
| iterations | 7800 |
| time_elapsed | 177 |
| total_timesteps | 39000 |
| train/ | |
| entropy_loss | -41.6 |
| explained_variance | -2.5e+23 |
| learning_rate | 0.0002 |
| n_updates | 7799 |
| policy_loss | 3.23e+08 |
| std | 0.969 |
| value_loss | 8.21e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 218 |
| iterations | 7900 |
| time_elapsed | 181 |
| total_timesteps | 39500 |
| train/ | |
| entropy_loss | -41.6 |
| explained_variance | -3.76e+23 |
| learning_rate | 0.0002 |
| n_updates | 7899 |
| policy_loss | 4.25e+08 |
| std | 0.969 |
| value_loss | 1.23e+14 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 216 |
| iterations | 8000 |
| time_elapsed | 184 |
| total_timesteps | 40000 |
| train/ | |
| entropy_loss | -41.6 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 7999 |
| policy_loss | 5.93e+08 |
| std | 0.969 |
| value_loss | 2.54e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:5004208.576042484
Sharpe: 1.0844189746438444
=================================
------------------------------------
| time/ | |
| fps | 215 |
| iterations | 8100 |
| time_elapsed | 187 |
| total_timesteps | 40500 |
| train/ | |
| entropy_loss | -41.6 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 8099 |
| policy_loss | 1.66e+08 |
| std | 0.969 |
| value_loss | 2e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 215 |
| iterations | 8200 |
| time_elapsed | 189 |
| total_timesteps | 41000 |
| train/ | |
| entropy_loss | -41.6 |
| explained_variance | -9.41e+22 |
| learning_rate | 0.0002 |
| n_updates | 8199 |
| policy_loss | 2.17e+08 |
| std | 0.969 |
| value_loss | 3.1e+13 |
-------------------------------------
-------------------------------------
| time/ | |
| fps | 215 |
| iterations | 8300 |
| time_elapsed | 192 |
| total_timesteps | 41500 |
| train/ | |
| entropy_loss | -41.6 |
| explained_variance | -2.31e+23 |
| learning_rate | 0.0002 |
| n_updates | 8299 |
| policy_loss | 3.37e+08 |
| std | 0.968 |
| value_loss | 7.5e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 215 |
| iterations | 8400 |
| time_elapsed | 194 |
| total_timesteps | 42000 |
| train/ | |
| entropy_loss | -41.6 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 8399 |
| policy_loss | 3.99e+08 |
| std | 0.967 |
| value_loss | 1.15e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 215 |
| iterations | 8500 |
| time_elapsed | 197 |
| total_timesteps | 42500 |
| train/ | |
| entropy_loss | -41.5 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 8499 |
| policy_loss | 5.83e+08 |
| std | 0.967 |
| value_loss | 2.03e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4690651.093610478
Sharpe: 1.0439707122222264
=================================
-------------------------------------
| time/ | |
| fps | 215 |
| iterations | 8600 |
| time_elapsed | 199 |
| total_timesteps | 43000 |
| train/ | |
| entropy_loss | -41.5 |
| explained_variance | -1.44e+21 |
| learning_rate | 0.0002 |
| n_updates | 8599 |
| policy_loss | 1.58e+08 |
| std | 0.967 |
| value_loss | 1.95e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 215 |
| iterations | 8700 |
| time_elapsed | 202 |
| total_timesteps | 43500 |
| train/ | |
| entropy_loss | -41.5 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 8699 |
| policy_loss | 2.11e+08 |
| std | 0.966 |
| value_loss | 3.08e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 215 |
| iterations | 8800 |
| time_elapsed | 204 |
| total_timesteps | 44000 |
| train/ | |
| entropy_loss | -41.5 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 8799 |
| policy_loss | 3.28e+08 |
| std | 0.965 |
| value_loss | 7.03e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 214 |
| iterations | 8900 |
| time_elapsed | 207 |
| total_timesteps | 44500 |
| train/ | |
| entropy_loss | -41.5 |
| explained_variance | -3.36e+23 |
| learning_rate | 0.0002 |
| n_updates | 8899 |
| policy_loss | 4.06e+08 |
| std | 0.965 |
| value_loss | 1.1e+14 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 213 |
| iterations | 9000 |
| time_elapsed | 210 |
| total_timesteps | 45000 |
| train/ | |
| entropy_loss | -41.5 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 8999 |
| policy_loss | 5.2e+08 |
| std | 0.964 |
| value_loss | 1.98e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4660061.433540329
Sharpe: 1.04048695684595
=================================
-------------------------------------
| time/ | |
| fps | 213 |
| iterations | 9100 |
| time_elapsed | 213 |
| total_timesteps | 45500 |
| train/ | |
| entropy_loss | -41.4 |
| explained_variance | -1.77e+21 |
| learning_rate | 0.0002 |
| n_updates | 9099 |
| policy_loss | 1.62e+08 |
| std | 0.964 |
| value_loss | 1.83e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 213 |
| iterations | 9200 |
| time_elapsed | 215 |
| total_timesteps | 46000 |
| train/ | |
| entropy_loss | -41.4 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 9199 |
| policy_loss | 2.01e+08 |
| std | 0.964 |
| value_loss | 2.87e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 213 |
| iterations | 9300 |
| time_elapsed | 217 |
| total_timesteps | 46500 |
| train/ | |
| entropy_loss | -41.4 |
| explained_variance | -2.13e+23 |
| learning_rate | 0.0002 |
| n_updates | 9299 |
| policy_loss | 3.31e+08 |
| std | 0.963 |
| value_loss | 7e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 213 |
| iterations | 9400 |
| time_elapsed | 220 |
| total_timesteps | 47000 |
| train/ | |
| entropy_loss | -41.4 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 9399 |
| policy_loss | 4.06e+08 |
| std | 0.963 |
| value_loss | 1.1e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 213 |
| iterations | 9500 |
| time_elapsed | 222 |
| total_timesteps | 47500 |
| train/ | |
| entropy_loss | -41.4 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 9499 |
| policy_loss | 5.33e+08 |
| std | 0.962 |
| value_loss | 2.11e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4841177.689704771
Sharpe: 1.0662304642107994
=================================
------------------------------------
| time/ | |
| fps | 213 |
| iterations | 9600 |
| time_elapsed | 224 |
| total_timesteps | 48000 |
| train/ | |
| entropy_loss | -41.4 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 9599 |
| policy_loss | 1.42e+08 |
| std | 0.962 |
| value_loss | 1.54e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 213 |
| iterations | 9700 |
| time_elapsed | 226 |
| total_timesteps | 48500 |
| train/ | |
| entropy_loss | -41.4 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 9699 |
| policy_loss | 1.72e+08 |
| std | 0.961 |
| value_loss | 2.54e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 213 |
| iterations | 9800 |
| time_elapsed | 229 |
| total_timesteps | 49000 |
| train/ | |
| entropy_loss | -41.4 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 9799 |
| policy_loss | 3.05e+08 |
| std | 0.961 |
| value_loss | 6.27e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 213 |
| iterations | 9900 |
| time_elapsed | 232 |
| total_timesteps | 49500 |
| train/ | |
| entropy_loss | -41.4 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 9899 |
| policy_loss | 3.52e+08 |
| std | 0.962 |
| value_loss | 9.87e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 212 |
| iterations | 10000 |
| time_elapsed | 234 |
| total_timesteps | 50000 |
| train/ | |
| entropy_loss | -41.4 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 9999 |
| policy_loss | 4.99e+08 |
| std | 0.962 |
| value_loss | 1.98e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4829593.807900699
Sharpe: 1.0662441117803074
=================================
------------------------------------
| time/ | |
| fps | 212 |
| iterations | 10100 |
| time_elapsed | 237 |
| total_timesteps | 50500 |
| train/ | |
| entropy_loss | -41.4 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 10099 |
| policy_loss | 1.41e+08 |
| std | 0.962 |
| value_loss | 1.59e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 212 |
| iterations | 10200 |
| time_elapsed | 239 |
| total_timesteps | 51000 |
| train/ | |
| entropy_loss | -41.4 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 10199 |
| policy_loss | 1.88e+08 |
| std | 0.961 |
| value_loss | 2.59e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 212 |
| iterations | 10300 |
| time_elapsed | 242 |
| total_timesteps | 51500 |
| train/ | |
| entropy_loss | -41.4 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 10299 |
| policy_loss | 3.11e+08 |
| std | 0.961 |
| value_loss | 5.9e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 212 |
| iterations | 10400 |
| time_elapsed | 244 |
| total_timesteps | 52000 |
| train/ | |
| entropy_loss | -41.4 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 10399 |
| policy_loss | 3.57e+08 |
| std | 0.961 |
| value_loss | 9.64e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 212 |
| iterations | 10500 |
| time_elapsed | 246 |
| total_timesteps | 52500 |
| train/ | |
| entropy_loss | -41.4 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 10499 |
| policy_loss | 4.69e+08 |
| std | 0.961 |
| value_loss | 1.89e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4867642.492651795
Sharpe: 1.0695800575241914
=================================
------------------------------------
| time/ | |
| fps | 212 |
| iterations | 10600 |
| time_elapsed | 249 |
| total_timesteps | 53000 |
| train/ | |
| entropy_loss | -41.3 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 10599 |
| policy_loss | 1.44e+08 |
| std | 0.96 |
| value_loss | 1.48e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 212 |
| iterations | 10700 |
| time_elapsed | 251 |
| total_timesteps | 53500 |
| train/ | |
| entropy_loss | -41.3 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 10699 |
| policy_loss | 1.9e+08 |
| std | 0.96 |
| value_loss | 2.62e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 212 |
| iterations | 10800 |
| time_elapsed | 253 |
| total_timesteps | 54000 |
| train/ | |
| entropy_loss | -41.3 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 10799 |
| policy_loss | 3.1e+08 |
| std | 0.959 |
| value_loss | 6.5e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 212 |
| iterations | 10900 |
| time_elapsed | 256 |
| total_timesteps | 54500 |
| train/ | |
| entropy_loss | -41.3 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 10899 |
| policy_loss | 3.56e+08 |
| std | 0.959 |
| value_loss | 1.09e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 212 |
| iterations | 11000 |
| time_elapsed | 258 |
| total_timesteps | 55000 |
| train/ | |
| entropy_loss | -41.3 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 10999 |
| policy_loss | 4.86e+08 |
| std | 0.958 |
| value_loss | 1.8e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4722117.849533835
Sharpe: 1.0511916286251552
=================================
------------------------------------
| time/ | |
| fps | 212 |
| iterations | 11100 |
| time_elapsed | 261 |
| total_timesteps | 55500 |
| train/ | |
| entropy_loss | -41.2 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 11099 |
| policy_loss | 1.37e+08 |
| std | 0.957 |
| value_loss | 1.42e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 212 |
| iterations | 11200 |
| time_elapsed | 263 |
| total_timesteps | 56000 |
| train/ | |
| entropy_loss | -41.2 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 11199 |
| policy_loss | 2.17e+08 |
| std | 0.956 |
| value_loss | 3.5e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 212 |
| iterations | 11300 |
| time_elapsed | 265 |
| total_timesteps | 56500 |
| train/ | |
| entropy_loss | -41.2 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 11299 |
| policy_loss | 3.17e+08 |
| std | 0.957 |
| value_loss | 7.01e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 212 |
| iterations | 11400 |
| time_elapsed | 268 |
| total_timesteps | 57000 |
| train/ | |
| entropy_loss | -41.2 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 11399 |
| policy_loss | 3.67e+08 |
| std | 0.956 |
| value_loss | 1.15e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 211 |
| iterations | 11500 |
| time_elapsed | 271 |
| total_timesteps | 57500 |
| train/ | |
| entropy_loss | -41.2 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 11499 |
| policy_loss | 5.1e+08 |
| std | 0.956 |
| value_loss | 1.78e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4803878.457147342
Sharpe: 1.0585455233591723
=================================
------------------------------------
| time/ | |
| fps | 211 |
| iterations | 11600 |
| time_elapsed | 274 |
| total_timesteps | 58000 |
| train/ | |
| entropy_loss | -41.2 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 11599 |
| policy_loss | 1.22e+08 |
| std | 0.956 |
| value_loss | 1.16e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 211 |
| iterations | 11700 |
| time_elapsed | 276 |
| total_timesteps | 58500 |
| train/ | |
| entropy_loss | -41.2 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 11699 |
| policy_loss | 2.17e+08 |
| std | 0.956 |
| value_loss | 3.15e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 211 |
| iterations | 11800 |
| time_elapsed | 279 |
| total_timesteps | 59000 |
| train/ | |
| entropy_loss | -41.2 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 11799 |
| policy_loss | 3.13e+08 |
| std | 0.956 |
| value_loss | 6.62e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 211 |
| iterations | 11900 |
| time_elapsed | 281 |
| total_timesteps | 59500 |
| train/ | |
| entropy_loss | -41.2 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 11899 |
| policy_loss | 4.11e+08 |
| std | 0.956 |
| value_loss | 1.2e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 211 |
| iterations | 12000 |
| time_elapsed | 283 |
| total_timesteps | 60000 |
| train/ | |
| entropy_loss | -41.2 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 11999 |
| policy_loss | 5.16e+08 |
| std | 0.956 |
| value_loss | 1.93e+14 |
------------------------------------
###Markdown
Model 2: **PPO**
###Code
agent = DRLAgent(env = env_train)
PPO_PARAMS = {
"n_steps": 2048,
"ent_coef": 0.005,
"learning_rate": 0.0001,
"batch_size": 128,
}
model_ppo = agent.get_model("ppo",model_kwargs = PPO_PARAMS)
trained_ppo = agent.train_model(model=model_ppo,
tb_log_name='ppo',
total_timesteps=80000)
###Output
Logging to tensorboard_log/ppo/ppo_3
-----------------------------
| time/ | |
| fps | 458 |
| iterations | 1 |
| time_elapsed | 4 |
| total_timesteps | 2048 |
-----------------------------
=================================
begin_total_asset:1000000
end_total_asset:4917364.6278486075
Sharpe: 1.074414829116363
=================================
--------------------------------------------
| time/ | |
| fps | 391 |
| iterations | 2 |
| time_elapsed | 10 |
| total_timesteps | 4096 |
| train/ | |
| approx_kl | -7.8231096e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -3.71e+14 |
| learning_rate | 0.0001 |
| loss | 7.78e+14 |
| n_updates | 10 |
| policy_gradient_loss | -6.16e-07 |
| std | 1 |
| value_loss | 1.57e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4996331.100586685
Sharpe: 1.0890927964884638
=================================
--------------------------------------------
| time/ | |
| fps | 373 |
| iterations | 3 |
| time_elapsed | 16 |
| total_timesteps | 6144 |
| train/ | |
| approx_kl | -3.5390258e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -8.76e+14 |
| learning_rate | 0.0001 |
| loss | 1.1e+15 |
| n_updates | 20 |
| policy_gradient_loss | -4.29e-07 |
| std | 1 |
| value_loss | 2.33e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4751039.2878817525
Sharpe: 1.0560179406423764
=================================
--------------------------------------------
| time/ | |
| fps | 365 |
| iterations | 4 |
| time_elapsed | 22 |
| total_timesteps | 8192 |
| train/ | |
| approx_kl | -1.6763806e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -8.01e+15 |
| learning_rate | 0.0001 |
| loss | 1.25e+15 |
| n_updates | 30 |
| policy_gradient_loss | -5.58e-07 |
| std | 1 |
| value_loss | 2.59e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4769059.347696523
Sharpe: 1.056814654380227
=================================
--------------------------------------------
| time/ | |
| fps | 360 |
| iterations | 5 |
| time_elapsed | 28 |
| total_timesteps | 10240 |
| train/ | |
| approx_kl | -5.5879354e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -2.55e+16 |
| learning_rate | 0.0001 |
| loss | 1.24e+15 |
| n_updates | 40 |
| policy_gradient_loss | -4.9e-07 |
| std | 1 |
| value_loss | 2.7e+15 |
--------------------------------------------
--------------------------------------------
| time/ | |
| fps | 358 |
| iterations | 6 |
| time_elapsed | 34 |
| total_timesteps | 12288 |
| train/ | |
| approx_kl | 1.13621354e-07 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -9.17e+16 |
| learning_rate | 0.0001 |
| loss | 1.35e+15 |
| n_updates | 50 |
| policy_gradient_loss | -4.28e-07 |
| std | 1 |
| value_loss | 2.77e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4816491.86007194
Sharpe: 1.0636199939613733
=================================
-------------------------------------------
| time/ | |
| fps | 356 |
| iterations | 7 |
| time_elapsed | 40 |
| total_timesteps | 14336 |
| train/ | |
| approx_kl | 3.5390258e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.42e+17 |
| learning_rate | 0.0001 |
| loss | 1.03e+15 |
| n_updates | 60 |
| policy_gradient_loss | -6.52e-07 |
| std | 1 |
| value_loss | 1.94e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4631919.83090099
Sharpe: 1.0396504731290799
=================================
-------------------------------------------
| time/ | |
| fps | 354 |
| iterations | 8 |
| time_elapsed | 46 |
| total_timesteps | 16384 |
| train/ | |
| approx_kl | 1.7508864e-07 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -6.93e+17 |
| learning_rate | 0.0001 |
| loss | 9.83e+14 |
| n_updates | 70 |
| policy_gradient_loss | -5.78e-07 |
| std | 1 |
| value_loss | 2.06e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4728763.286321457
Sharpe: 1.052390302374202
=================================
-------------------------------------------
| time/ | |
| fps | 353 |
| iterations | 9 |
| time_elapsed | 52 |
| total_timesteps | 18432 |
| train/ | |
| approx_kl | 4.4703484e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.72e+18 |
| learning_rate | 0.0001 |
| loss | 1.25e+15 |
| n_updates | 80 |
| policy_gradient_loss | -4.84e-07 |
| std | 1 |
| value_loss | 2.33e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4439983.024798136
Sharpe: 1.013829383303325
=================================
--------------------------------------------
| time/ | |
| fps | 352 |
| iterations | 10 |
| time_elapsed | 58 |
| total_timesteps | 20480 |
| train/ | |
| approx_kl | -1.3038516e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.7e+18 |
| learning_rate | 0.0001 |
| loss | 1.17e+15 |
| n_updates | 90 |
| policy_gradient_loss | -4.82e-07 |
| std | 1 |
| value_loss | 2.58e+15 |
--------------------------------------------
-------------------------------------------
| time/ | |
| fps | 352 |
| iterations | 11 |
| time_elapsed | 63 |
| total_timesteps | 22528 |
| train/ | |
| approx_kl | -9.313226e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.85e+18 |
| learning_rate | 0.0001 |
| loss | 1.2e+15 |
| n_updates | 100 |
| policy_gradient_loss | -5.2e-07 |
| std | 1 |
| value_loss | 2.51e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:5048884.524536961
Sharpe: 1.0963911876706685
=================================
-------------------------------------------
| time/ | |
| fps | 351 |
| iterations | 12 |
| time_elapsed | 69 |
| total_timesteps | 24576 |
| train/ | |
| approx_kl | 3.7252903e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -2.67e+18 |
| learning_rate | 0.0001 |
| loss | 1.44e+15 |
| n_updates | 110 |
| policy_gradient_loss | -4.53e-07 |
| std | 1 |
| value_loss | 2.8e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4824229.456193555
Sharpe: 1.0648549464252506
=================================
-------------------------------------------
| time/ | |
| fps | 351 |
| iterations | 13 |
| time_elapsed | 75 |
| total_timesteps | 26624 |
| train/ | |
| approx_kl | 3.3527613e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -3.38e+18 |
| learning_rate | 0.0001 |
| loss | 7.89e+14 |
| n_updates | 120 |
| policy_gradient_loss | -6.06e-07 |
| std | 1 |
| value_loss | 1.76e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4602974.615591427
Sharpe: 1.034753433280377
=================================
-------------------------------------------
| time/ | |
| fps | 350 |
| iterations | 14 |
| time_elapsed | 81 |
| total_timesteps | 28672 |
| train/ | |
| approx_kl | 8.8475645e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.75e+19 |
| learning_rate | 0.0001 |
| loss | 1.23e+15 |
| n_updates | 130 |
| policy_gradient_loss | -5.8e-07 |
| std | 1 |
| value_loss | 2.27e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4608422.583401322
Sharpe: 1.035300880612428
=================================
-------------------------------------------
| time/ | |
| fps | 349 |
| iterations | 15 |
| time_elapsed | 87 |
| total_timesteps | 30720 |
| train/ | |
| approx_kl | 1.3038516e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -7.71e+18 |
| learning_rate | 0.0001 |
| loss | 1.22e+15 |
| n_updates | 140 |
| policy_gradient_loss | -5.63e-07 |
| std | 1 |
| value_loss | 2.39e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4826869.636472441
Sharpe: 1.0676330284861433
=================================
--------------------------------------------
| time/ | |
| fps | 348 |
| iterations | 16 |
| time_elapsed | 94 |
| total_timesteps | 32768 |
| train/ | |
| approx_kl | -1.4901161e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.51e+19 |
| learning_rate | 0.0001 |
| loss | 1.22e+15 |
| n_updates | 150 |
| policy_gradient_loss | -5.78e-07 |
| std | 1 |
| value_loss | 2.7e+15 |
--------------------------------------------
-------------------------------------------
| time/ | |
| fps | 346 |
| iterations | 17 |
| time_elapsed | 100 |
| total_timesteps | 34816 |
| train/ | |
| approx_kl | -5.401671e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.48e+19 |
| learning_rate | 0.0001 |
| loss | 1.48e+15 |
| n_updates | 160 |
| policy_gradient_loss | -3.96e-07 |
| std | 1 |
| value_loss | 2.81e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4364006.929301854
Sharpe: 1.002176631256902
=================================
--------------------------------------------
| time/ | |
| fps | 345 |
| iterations | 18 |
| time_elapsed | 106 |
| total_timesteps | 36864 |
| train/ | |
| approx_kl | -1.0803342e-07 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.15e+19 |
| learning_rate | 0.0001 |
| loss | 8.41e+14 |
| n_updates | 170 |
| policy_gradient_loss | -4.91e-07 |
| std | 1 |
| value_loss | 1.58e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4796634.5596691
Sharpe: 1.0678319491053092
=================================
--------------------------------------------
| time/ | |
| fps | 344 |
| iterations | 19 |
| time_elapsed | 112 |
| total_timesteps | 38912 |
| train/ | |
| approx_kl | -1.3038516e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -4.21e+19 |
| learning_rate | 0.0001 |
| loss | 1.03e+15 |
| n_updates | 180 |
| policy_gradient_loss | -5.6e-07 |
| std | 1 |
| value_loss | 2.02e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4969786.413399254
Sharpe: 1.0823021486710163
=================================
--------------------------------------------
| time/ | |
| fps | 344 |
| iterations | 20 |
| time_elapsed | 118 |
| total_timesteps | 40960 |
| train/ | |
| approx_kl | -6.7055225e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -6.41e+19 |
| learning_rate | 0.0001 |
| loss | 1.22e+15 |
| n_updates | 190 |
| policy_gradient_loss | -2.87e-07 |
| std | 1 |
| value_loss | 2.4e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4885480.801922398
Sharpe: 1.0729451877791811
=================================
--------------------------------------------
| time/ | |
| fps | 343 |
| iterations | 21 |
| time_elapsed | 125 |
| total_timesteps | 43008 |
| train/ | |
| approx_kl | -5.5879354e-09 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -6.85e+19 |
| learning_rate | 0.0001 |
| loss | 1.62e+15 |
| n_updates | 200 |
| policy_gradient_loss | -5.24e-07 |
| std | 1 |
| value_loss | 2.95e+15 |
--------------------------------------------
-------------------------------------------
| time/ | |
| fps | 343 |
| iterations | 22 |
| time_elapsed | 131 |
| total_timesteps | 45056 |
| train/ | |
| approx_kl | 1.8067658e-07 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -7.01e+19 |
| learning_rate | 0.0001 |
| loss | 1.34e+15 |
| n_updates | 210 |
| policy_gradient_loss | -4.62e-07 |
| std | 1 |
| value_loss | 2.93e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:5613709.009268909
Sharpe: 1.1673870008513114
=================================
--------------------------------------------
| time/ | |
| fps | 342 |
| iterations | 23 |
| time_elapsed | 137 |
| total_timesteps | 47104 |
| train/ | |
| approx_kl | -2.0489097e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -6.72e+19 |
| learning_rate | 0.0001 |
| loss | 1.41e+15 |
| n_updates | 220 |
| policy_gradient_loss | -4.78e-07 |
| std | 1 |
| value_loss | 2.71e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:5043800.590470289
Sharpe: 1.0953673306850924
=================================
-------------------------------------------
| time/ | |
| fps | 342 |
| iterations | 24 |
| time_elapsed | 143 |
| total_timesteps | 49152 |
| train/ | |
| approx_kl | 2.4214387e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.37e+20 |
| learning_rate | 0.0001 |
| loss | 1.01e+15 |
| n_updates | 230 |
| policy_gradient_loss | -5.28e-07 |
| std | 1 |
| value_loss | 2.26e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4776576.852863929
Sharpe: 1.0593811754233755
=================================
-------------------------------------------
| time/ | |
| fps | 342 |
| iterations | 25 |
| time_elapsed | 149 |
| total_timesteps | 51200 |
| train/ | |
| approx_kl | 4.4703484e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -3.27e+20 |
| learning_rate | 0.0001 |
| loss | 1.21e+15 |
| n_updates | 240 |
| policy_gradient_loss | -4.82e-07 |
| std | 1 |
| value_loss | 2.46e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4468393.200157898
Sharpe: 1.0192746589767419
=================================
-------------------------------------------
| time/ | |
| fps | 341 |
| iterations | 26 |
| time_elapsed | 156 |
| total_timesteps | 53248 |
| train/ | |
| approx_kl | 2.6077032e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.96e+20 |
| learning_rate | 0.0001 |
| loss | 1.31e+15 |
| n_updates | 250 |
| policy_gradient_loss | -5.36e-07 |
| std | 1 |
| value_loss | 2.59e+15 |
-------------------------------------------
--------------------------------------------
| time/ | |
| fps | 341 |
| iterations | 27 |
| time_elapsed | 162 |
| total_timesteps | 55296 |
| train/ | |
| approx_kl | -1.3038516e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.68e+20 |
| learning_rate | 0.0001 |
| loss | 1.33e+15 |
| n_updates | 260 |
| policy_gradient_loss | -3.77e-07 |
| std | 1 |
| value_loss | 2.51e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4875234.39450474
Sharpe: 1.0721137742534572
=================================
--------------------------------------------
| time/ | |
| fps | 340 |
| iterations | 28 |
| time_elapsed | 168 |
| total_timesteps | 57344 |
| train/ | |
| approx_kl | -1.2479722e-07 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.66e+20 |
| learning_rate | 0.0001 |
| loss | 1.59e+15 |
| n_updates | 270 |
| policy_gradient_loss | -4.61e-07 |
| std | 1 |
| value_loss | 2.8e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4600459.210918712
Sharpe: 1.034756153745345
=================================
-------------------------------------------
| time/ | |
| fps | 340 |
| iterations | 29 |
| time_elapsed | 174 |
| total_timesteps | 59392 |
| train/ | |
| approx_kl | -4.284084e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.26e+20 |
| learning_rate | 0.0001 |
| loss | 8.07e+14 |
| n_updates | 280 |
| policy_gradient_loss | -5.44e-07 |
| std | 1 |
| value_loss | 1.62e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4526188.381438201
Sharpe: 1.0293846869900876
=================================
--------------------------------------------
| time/ | |
| fps | 339 |
| iterations | 30 |
| time_elapsed | 180 |
| total_timesteps | 61440 |
| train/ | |
| approx_kl | -2.4214387e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -6.44e+20 |
| learning_rate | 0.0001 |
| loss | 1.12e+15 |
| n_updates | 290 |
| policy_gradient_loss | -5.65e-07 |
| std | 1 |
| value_loss | 2.1e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4487836.803716703
Sharpe: 1.010974660894394
=================================
--------------------------------------------
| time/ | |
| fps | 339 |
| iterations | 31 |
| time_elapsed | 187 |
| total_timesteps | 63488 |
| train/ | |
| approx_kl | -2.6077032e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -4.47e+20 |
| learning_rate | 0.0001 |
| loss | 1.14e+15 |
| n_updates | 300 |
| policy_gradient_loss | -4.8e-07 |
| std | 1 |
| value_loss | 2.25e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4480729.650671386
Sharpe: 1.0219085518652522
=================================
--------------------------------------------
| time/ | |
| fps | 339 |
| iterations | 32 |
| time_elapsed | 193 |
| total_timesteps | 65536 |
| train/ | |
| approx_kl | -2.0302832e-07 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -3.87e+20 |
| learning_rate | 0.0001 |
| loss | 1.28e+15 |
| n_updates | 310 |
| policy_gradient_loss | -4.4e-07 |
| std | 1 |
| value_loss | 2.51e+15 |
--------------------------------------------
------------------------------------------
| time/ | |
| fps | 339 |
| iterations | 33 |
| time_elapsed | 199 |
| total_timesteps | 67584 |
| train/ | |
| approx_kl | 1.359731e-07 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -3.68e+20 |
| learning_rate | 0.0001 |
| loss | 1.24e+15 |
| n_updates | 320 |
| policy_gradient_loss | -4.51e-07 |
| std | 1 |
| value_loss | 2.66e+15 |
------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4399373.734699048
Sharpe: 1.005407087483561
=================================
-------------------------------------------
| time/ | |
| fps | 338 |
| iterations | 34 |
| time_elapsed | 205 |
| total_timesteps | 69632 |
| train/ | |
| approx_kl | 2.2351742e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -2.29e+20 |
| learning_rate | 0.0001 |
| loss | 8.5e+14 |
| n_updates | 330 |
| policy_gradient_loss | -5.56e-07 |
| std | 1 |
| value_loss | 1.64e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4305742.921261859
Sharpe: 0.9945061913961891
=================================
-------------------------------------------
| time/ | |
| fps | 338 |
| iterations | 35 |
| time_elapsed | 211 |
| total_timesteps | 71680 |
| train/ | |
| approx_kl | 1.3411045e-07 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -7.11e+20 |
| learning_rate | 0.0001 |
| loss | 7.97e+14 |
| n_updates | 340 |
| policy_gradient_loss | -6.48e-07 |
| std | 1 |
| value_loss | 1.8e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4794175.629957249
Sharpe: 1.0611635246548963
=================================
--------------------------------------------
| time/ | |
| fps | 338 |
| iterations | 36 |
| time_elapsed | 217 |
| total_timesteps | 73728 |
| train/ | |
| approx_kl | -3.3527613e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.16e+21 |
| learning_rate | 0.0001 |
| loss | 1.07e+15 |
| n_updates | 350 |
| policy_gradient_loss | -4.82e-07 |
| std | 1 |
| value_loss | 2.06e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4467487.416264421
Sharpe: 1.021012208464475
=================================
------------------------------------------
| time/ | |
| fps | 338 |
| iterations | 37 |
| time_elapsed | 224 |
| total_timesteps | 75776 |
| train/ | |
| approx_kl | 5.401671e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -9.89e+20 |
| learning_rate | 0.0001 |
| loss | 1.46e+15 |
| n_updates | 360 |
| policy_gradient_loss | -4.78e-07 |
| std | 1 |
| value_loss | 2.75e+15 |
------------------------------------------
-------------------------------------------
| time/ | |
| fps | 338 |
| iterations | 38 |
| time_elapsed | 229 |
| total_timesteps | 77824 |
| train/ | |
| approx_kl | 1.6763806e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -7.64e+20 |
| learning_rate | 0.0001 |
| loss | 1.25e+15 |
| n_updates | 370 |
| policy_gradient_loss | -4.54e-07 |
| std | 1 |
| value_loss | 2.57e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4806649.219027834
Sharpe: 1.0604486398186765
=================================
------------------------------------------
| time/ | |
| fps | 338 |
| iterations | 39 |
| time_elapsed | 236 |
| total_timesteps | 79872 |
| train/ | |
| approx_kl | 4.284084e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -6.96e+20 |
| learning_rate | 0.0001 |
| loss | 1.28e+15 |
| n_updates | 380 |
| policy_gradient_loss | -5.9e-07 |
| std | 1 |
| value_loss | 2.44e+15 |
------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4653147.508966551
Sharpe: 1.043189911078732
=================================
-------------------------------------------
| time/ | |
| fps | 338 |
| iterations | 40 |
| time_elapsed | 242 |
| total_timesteps | 81920 |
| train/ | |
| approx_kl | 6.3329935e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.04e+21 |
| learning_rate | 0.0001 |
| loss | 1.01e+15 |
| n_updates | 390 |
| policy_gradient_loss | -5.33e-07 |
| std | 1 |
| value_loss | 1.82e+15 |
-------------------------------------------
###Markdown
Model 3: **DDPG**
###Code
agent = DRLAgent(env = env_train)
DDPG_PARAMS = {"batch_size": 128, "buffer_size": 50000, "learning_rate": 0.001}
model_ddpg = agent.get_model("ddpg",model_kwargs = DDPG_PARAMS)
trained_ddpg = agent.train_model(model=model_ddpg,
tb_log_name='ddpg',
total_timesteps=50000)
###Output
Logging to tensorboard_log/ddpg/ddpg_2
=================================
begin_total_asset:1000000
end_total_asset:4625995.900359718
Sharpe: 1.040202670783119
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
----------------------------------
| time/ | |
| episodes | 4 |
| fps | 22 |
| time_elapsed | 439 |
| total timesteps | 10064 |
| train/ | |
| actor_loss | -6.99e+07 |
| critic_loss | 7.27e+12 |
| learning_rate | 0.001 |
| n_updates | 7548 |
----------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
----------------------------------
| time/ | |
| episodes | 8 |
| fps | 20 |
| time_elapsed | 980 |
| total timesteps | 20128 |
| train/ | |
| actor_loss | -1.44e+08 |
| critic_loss | 1.81e+13 |
| learning_rate | 0.001 |
| n_updates | 17612 |
----------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
----------------------------------
| time/ | |
| episodes | 12 |
| fps | 19 |
| time_elapsed | 1542 |
| total timesteps | 30192 |
| train/ | |
| actor_loss | -1.88e+08 |
| critic_loss | 2.72e+13 |
| learning_rate | 0.001 |
| n_updates | 27676 |
----------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
----------------------------------
| time/ | |
| episodes | 16 |
| fps | 18 |
| time_elapsed | 2133 |
| total timesteps | 40256 |
| train/ | |
| actor_loss | -2.15e+08 |
| critic_loss | 3.45e+13 |
| learning_rate | 0.001 |
| n_updates | 37740 |
----------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
---------------------------------
| time/ | |
| episodes | 20 |
| fps | 17 |
| time_elapsed | 2874 |
| total timesteps | 50320 |
| train/ | |
| actor_loss | -2.3e+08 |
| critic_loss | 4.05e+13 |
| learning_rate | 0.001 |
| n_updates | 47804 |
---------------------------------
###Markdown
Model 4: **SAC**
###Code
agent = DRLAgent(env = env_train)
SAC_PARAMS = {
"batch_size": 128,
"buffer_size": 100000,
"learning_rate": 0.0003,
"learning_starts": 100,
"ent_coef": "auto_0.1",
}
model_sac = agent.get_model("sac",model_kwargs = SAC_PARAMS)
trained_sac = agent.train_model(model=model_sac,
tb_log_name='sac',
total_timesteps=50000)
###Output
Logging to tensorboard_log/sac/sac_1
=================================
begin_total_asset:1000000
end_total_asset:4449463.498168942
Sharpe: 1.01245667390232
=================================
=================================
begin_total_asset:1000000
end_total_asset:4418643.239765096
Sharpe: 1.0135796594260282
=================================
=================================
begin_total_asset:1000000
end_total_asset:4418644.1960784905
Sharpe: 1.0135797537524718
=================================
=================================
begin_total_asset:1000000
end_total_asset:4418659.429680678
Sharpe: 1.013581852537709
=================================
----------------------------------
| time/ | |
| episodes | 4 |
| fps | 12 |
| time_elapsed | 783 |
| total timesteps | 10064 |
| train/ | |
| actor_loss | -8.83e+07 |
| critic_loss | 6.57e+12 |
| ent_coef | 2.24 |
| ent_coef_loss | -205 |
| learning_rate | 0.0003 |
| n_updates | 9963 |
----------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4418651.576406099
Sharpe: 1.013581224026754
=================================
=================================
begin_total_asset:1000000
end_total_asset:4418670.948269031
Sharpe: 1.0135838030234754
=================================
=================================
begin_total_asset:1000000
end_total_asset:4418682.278829884
Sharpe: 1.013585596968056
=================================
=================================
begin_total_asset:1000000
end_total_asset:4418791.911955293
Sharpe: 1.0136007328171013
=================================
----------------------------------
| time/ | |
| episodes | 8 |
| fps | 12 |
| time_elapsed | 1585 |
| total timesteps | 20128 |
| train/ | |
| actor_loss | -1.51e+08 |
| critic_loss | 1.12e+13 |
| ent_coef | 41.7 |
| ent_coef_loss | -670 |
| learning_rate | 0.0003 |
| n_updates | 20027 |
----------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4418737.365107464
Sharpe: 1.0135970410224868
=================================
=================================
begin_total_asset:1000000
end_total_asset:4418754.895735274
Sharpe: 1.0135965589029627
=================================
=================================
begin_total_asset:1000000
end_total_asset:4419325.814567342
Sharpe: 1.0136807224228588
=================================
=================================
begin_total_asset:1000000
end_total_asset:4418142.473513333
Sharpe: 1.0135234795926031
=================================
----------------------------------
| time/ | |
| episodes | 12 |
| fps | 12 |
| time_elapsed | 2400 |
| total timesteps | 30192 |
| train/ | |
| actor_loss | -1.85e+08 |
| critic_loss | 1.87e+13 |
| ent_coef | 725 |
| ent_coef_loss | -673 |
| learning_rate | 0.0003 |
| n_updates | 30091 |
----------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4422046.188863339
Sharpe: 1.0140936726052256
=================================
=================================
begin_total_asset:1000000
end_total_asset:4424919.463828854
Sharpe: 1.014521127041106
=================================
=================================
begin_total_asset:1000000
end_total_asset:4427483.152494239
Sharpe: 1.0148626804754584
=================================
=================================
begin_total_asset:1000000
end_total_asset:4460697.650185859
Sharpe: 1.019852362102548
=================================
----------------------------------
| time/ | |
| episodes | 16 |
| fps | 12 |
| time_elapsed | 3210 |
| total timesteps | 40256 |
| train/ | |
| actor_loss | -1.93e+08 |
| critic_loss | 1.62e+13 |
| ent_coef | 1.01e+04 |
| ent_coef_loss | -238 |
| learning_rate | 0.0003 |
| n_updates | 40155 |
----------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4434035.982803257
Sharpe: 1.0161512551319891
=================================
=================================
begin_total_asset:1000000
end_total_asset:4454728.906041551
Sharpe: 1.018484863448905
=================================
=================================
begin_total_asset:1000000
end_total_asset:4475667.120269234
Sharpe: 1.0215545521682856
=================================
###Markdown
Trading
Assume that we have $1,000,000 of initial capital at 2019-01-01. We use the trained A2C model to trade the Dow Jones 30 stocks.
###Code
trade = data_split(df,'2019-01-01', '2021-01-01')
e_trade_gym = StockPortfolioEnv(df = trade, **env_kwargs)
trade.shape
df_daily_return, df_actions = DRLAgent.DRL_prediction(model=trained_a2c,
environment = e_trade_gym)
df_daily_return.head()
df_actions.head()
df_actions.to_csv('df_actions.csv')
###Output
_____no_output_____
###Markdown
Part 7: Backtest Our Strategy
Backtesting plays a key role in evaluating the performance of a trading strategy. An automated backtesting tool is preferred because it reduces human error. We usually use the Quantopian pyfolio package to backtest our trading strategies. It is easy to use and consists of various individual plots that provide a comprehensive image of the performance of a trading strategy.
7.1 BackTestStats
Pass in df_daily_return; this information is stored in the env class.
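For reference, the Sharpe ratios printed in the training logs above can be reproduced from a series of daily returns with a minimal sketch like the following (assuming 252 trading days per year and a zero risk-free rate; pyfolio computes this and many other statistics for us below):
###Code
import numpy as np

def annualized_sharpe(daily_returns, periods_per_year=252):
    # Annualized Sharpe ratio: mean over standard deviation of daily returns,
    # scaled by the square root of the number of periods per year.
    r = np.asarray(daily_returns)
    return np.sqrt(periods_per_year) * r.mean() / r.std()
###Output
_____no_output_____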
###Code
from pyfolio import timeseries
DRL_strat = convert_daily_return_to_pyfolio_ts(df_daily_return)
perf_func = timeseries.perf_stats
perf_stats_all = perf_func(returns=DRL_strat,
                           factor_returns=DRL_strat,
                           positions=None,
                           transactions=None,
                           turnover_denom="AGB")
print("==============DRL Strategy Stats===========")
perf_stats_all
###Output
==============DRL Strategy Stats===========
###Markdown
7.2 BackTestPlot
###Code
import pyfolio
%matplotlib inline
baseline_df = get_baseline(
ticker='^DJI', start='2019-01-01', end='2021-01-01'
)
baseline_returns = get_daily_return(baseline_df, value_col_name="close")
with pyfolio.plotting.plotting_context(font_scale=1.1):
pyfolio.create_full_tear_sheet(returns = DRL_strat,
benchmark_rets=baseline_returns, set_context=False)
###Output
[*********************100%***********************] 1 of 1 completed
Shape of DataFrame: (505, 8)
###Markdown
Deep Reinforcement Learning for Stock Trading from Scratch: Portfolio Allocation
Tutorials to use OpenAI DRL to perform portfolio allocation in one Jupyter Notebook | Presented at NeurIPS 2020: Deep RL Workshop
* This blog is based on our paper: FinRL: A Deep Reinforcement Learning Library for Automated Stock Trading in Quantitative Finance, presented at NeurIPS 2020: Deep RL Workshop.
* Check out the medium blog for detailed explanations:
* Please report any issues to our Github: https://github.com/AI4Finance-LLC/FinRL-Library/issues
* **Pytorch Version**

Content
* [1. Problem Definition](0)
* [2. Getting Started - Load Python packages](1)
    * [2.1. Install Packages](1.1)
    * [2.2. Check Additional Packages](1.2)
    * [2.3. Import Packages](1.3)
    * [2.4. Create Folders](1.4)
* [3. Download Data](2)
* [4. Preprocess Data](3)
    * [4.1. Technical Indicators](3.1)
    * [4.2. Perform Feature Engineering](3.2)
* [5. Build Environment](4)
    * [5.1. Training & Trade Data Split](4.1)
    * [5.2. User-defined Environment](4.2)
    * [5.3. Initialize Environment](4.3)
* [6. Implement DRL Algorithms](5)
* [7. Backtesting Performance](6)
    * [7.1. BackTestStats](6.1)
    * [7.2. BackTestPlot](6.2)
    * [7.3. Baseline Stats](6.3)
    * [7.4. Compare to Stock Market Index](6.4)
Part 1. Problem Definition
This problem is to design an automated solution for portfolio allocation across the Dow 30 constituents. We model the trading process as a Markov Decision Process (MDP). We then formulate our trading goal as a maximization problem.
The agent is trained using Deep Reinforcement Learning (DRL) algorithms, and the components of the reinforcement learning environment are:
* Action: The action space describes the allowed actions through which the agent interacts with the environment. Normally, a ∈ A includes three actions: a ∈ {−1, 0, 1}, where −1, 0, 1 represent selling, holding, and buying one stock. An action can also be carried out on multiple shares. We use an action space {−k, ..., −1, 0, 1, ..., k}, where k denotes the number of shares. For example, "Buy 10 shares of AAPL" or "Sell 10 shares of AAPL" are encoded as 10 or −10, respectively.
* Reward function: r(s, a, s′) is the incentive mechanism for an agent to learn a better policy. It is the change of the portfolio value when action a is taken at state s and the agent arrives at the new state s′, i.e., r(s, a, s′) = v′ − v, where v′ and v represent the portfolio values at states s′ and s, respectively.
* State: The state space describes the observations that the agent receives from the environment. Just as a human trader analyzes various information before executing a trade, our trading agent observes many different features to learn better in an interactive environment.
* Environment: Dow 30 constituents
The data for the Dow 30 constituents used in this case study is obtained from the Yahoo Finance API. The data contains Open-High-Low-Close prices and volume.
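To make the reward definition above concrete, here is a minimal, self-contained sketch (an illustration under simplifying assumptions, not FinRL's internal code) that maps raw agent actions to portfolio weights and computes r(s, a, s′) = v′ − v for a single step:
###Code
import numpy as np

def softmax_weights(actions):
    # Map raw actions to non-negative portfolio weights that sum to 1.
    e = np.exp(actions - np.max(actions))  # subtract the max for numerical stability
    return e / e.sum()

def step_reward(portfolio_value, actions, asset_returns):
    # New portfolio value v' after one period, and the reward r = v' - v.
    weights = softmax_weights(actions)
    new_value = portfolio_value * (1.0 + weights @ asset_returns)
    return new_value, new_value - portfolio_value

# Toy example: 3 assets, one trading day
v_new, reward = step_reward(1_000_000.0,
                            actions=np.array([0.2, -0.1, 0.5]),
                            asset_returns=np.array([0.01, -0.005, 0.02]))
###Output
_____no_output_____
###Markdown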
Part 2. Getting Started - Load Python Packages
2.1. Install all the packages through FinRL library
###Code
## install finrl library
!pip install git+https://github.com/AI4Finance-LLC/FinRL-Library.git
###Output
Collecting git+https://github.com/AI4Finance-LLC/FinRL-Library.git
Cloning https://github.com/AI4Finance-LLC/FinRL-Library.git to /tmp/pip-req-build-bwdyljxc
Running command git clone -q https://github.com/AI4Finance-LLC/FinRL-Library.git /tmp/pip-req-build-bwdyljxc
Requirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from finrl==0.0.3) (1.19.5)
Requirement already satisfied: pandas>=1.1.5 in /usr/local/lib/python3.7/dist-packages (from finrl==0.0.3) (1.1.5)
Collecting stockstats
Downloading https://files.pythonhosted.org/packages/32/41/d3828c5bc0a262cb3112a4024108a3b019c183fa3b3078bff34bf25abf91/stockstats-0.3.2-py2.py3-none-any.whl
Collecting yfinance
Downloading https://files.pythonhosted.org/packages/7a/e8/b9d7104d3a4bf39924799067592d9e59119fcfc900a425a12e80a3123ec8/yfinance-0.1.55.tar.gz
Requirement already satisfied: matplotlib in /usr/local/lib/python3.7/dist-packages (from finrl==0.0.3) (3.2.2)
Requirement already satisfied: scikit-learn>=0.21.0 in /usr/local/lib/python3.7/dist-packages (from finrl==0.0.3) (0.22.2.post1)
Requirement already satisfied: gym>=0.17 in /usr/local/lib/python3.7/dist-packages (from finrl==0.0.3) (0.17.3)
Collecting stable-baselines3[extra]
  Downloading https://files.pythonhosted.org/packages/76/7c/ec89fd9a51c2ff640f150479069be817136c02f02349b5dd27a6e3bb8b3d/stable_baselines3-0.10.0-py3-none-any.whl (145kB)
     |████████████████████████████████| 153kB 6.0MB/s
Requirement already satisfied: pytest in /usr/local/lib/python3.7/dist-packages (from finrl==0.0.3) (3.6.4)
Requirement already satisfied: setuptools>=41.4.0 in /usr/local/lib/python3.7/dist-packages (from finrl==0.0.3) (53.0.0)
Requirement already satisfied: wheel>=0.33.6 in /usr/local/lib/python3.7/dist-packages (from finrl==0.0.3) (0.36.2)
Collecting pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2
Cloning https://github.com/quantopian/pyfolio.git to /tmp/pip-install-jk1inqx3/pyfolio
Running command git clone -q https://github.com/quantopian/pyfolio.git /tmp/pip-install-jk1inqx3/pyfolio
Requirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.7/dist-packages (from pandas>=1.1.5->finrl==0.0.3) (2.8.1)
Requirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.7/dist-packages (from pandas>=1.1.5->finrl==0.0.3) (2018.9)
Collecting int-date>=0.1.7
Downloading https://files.pythonhosted.org/packages/43/27/31803df15173ab341fe7548c14154b54227dfd8f630daa09a1c6e7db52f7/int_date-0.1.8-py2.py3-none-any.whl
Requirement already satisfied: requests>=2.20 in /usr/local/lib/python3.7/dist-packages (from yfinance->finrl==0.0.3) (2.23.0)
Requirement already satisfied: multitasking>=0.0.7 in /usr/local/lib/python3.7/dist-packages (from yfinance->finrl==0.0.3) (0.0.9)
Collecting lxml>=4.5.1
  Downloading https://files.pythonhosted.org/packages/d2/88/b25778f17e5320c1c58f8c5060fb5b037288e162bd7554c30799e9ea90db/lxml-4.6.2-cp37-cp37m-manylinux1_x86_64.whl (5.5MB)
     |████████████████████████████████| 5.5MB 8.8MB/s
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->finrl==0.0.3) (2.4.7)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.7/dist-packages (from matplotlib->finrl==0.0.3) (0.10.0)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->finrl==0.0.3) (1.3.1)
Requirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.7/dist-packages (from scikit-learn>=0.21.0->finrl==0.0.3) (1.0.1)
Requirement already satisfied: scipy>=0.17.0 in /usr/local/lib/python3.7/dist-packages (from scikit-learn>=0.21.0->finrl==0.0.3) (1.4.1)
Requirement already satisfied: pyglet<=1.5.0,>=1.4.0 in /usr/local/lib/python3.7/dist-packages (from gym>=0.17->finrl==0.0.3) (1.5.0)
Requirement already satisfied: cloudpickle<1.7.0,>=1.2.0 in /usr/local/lib/python3.7/dist-packages (from gym>=0.17->finrl==0.0.3) (1.3.0)
Requirement already satisfied: torch>=1.4.0 in /usr/local/lib/python3.7/dist-packages (from stable-baselines3[extra]->finrl==0.0.3) (1.7.0+cu101)
Requirement already satisfied: tensorboard; extra == "extra" in /usr/local/lib/python3.7/dist-packages (from stable-baselines3[extra]->finrl==0.0.3) (2.4.1)
Requirement already satisfied: psutil; extra == "extra" in /usr/local/lib/python3.7/dist-packages (from stable-baselines3[extra]->finrl==0.0.3) (5.4.8)
Requirement already satisfied: opencv-python; extra == "extra" in /usr/local/lib/python3.7/dist-packages (from stable-baselines3[extra]->finrl==0.0.3) (4.1.2.30)
Requirement already satisfied: pillow; extra == "extra" in /usr/local/lib/python3.7/dist-packages (from stable-baselines3[extra]->finrl==0.0.3) (7.0.0)
Requirement already satisfied: atari-py~=0.2.0; extra == "extra" in /usr/local/lib/python3.7/dist-packages (from stable-baselines3[extra]->finrl==0.0.3) (0.2.6)
Requirement already satisfied: six>=1.10.0 in /usr/local/lib/python3.7/dist-packages (from pytest->finrl==0.0.3) (1.15.0)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.7/dist-packages (from pytest->finrl==0.0.3) (20.3.0)
Requirement already satisfied: more-itertools>=4.0.0 in /usr/local/lib/python3.7/dist-packages (from pytest->finrl==0.0.3) (8.7.0)
Requirement already satisfied: pluggy<0.8,>=0.5 in /usr/local/lib/python3.7/dist-packages (from pytest->finrl==0.0.3) (0.7.1)
Requirement already satisfied: py>=1.5.0 in /usr/local/lib/python3.7/dist-packages (from pytest->finrl==0.0.3) (1.10.0)
Requirement already satisfied: atomicwrites>=1.0 in /usr/local/lib/python3.7/dist-packages (from pytest->finrl==0.0.3) (1.4.0)
Requirement already satisfied: ipython>=3.2.3 in /usr/local/lib/python3.7/dist-packages (from pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.3) (5.5.0)
Requirement already satisfied: seaborn>=0.7.1 in /usr/local/lib/python3.7/dist-packages (from pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.3) (0.11.1)
Collecting empyrical>=0.5.0
  Downloading https://files.pythonhosted.org/packages/74/43/1b997c21411c6ab7c96dc034e160198272c7a785aeea7654c9bcf98bec83/empyrical-0.5.5.tar.gz (52kB)
     |████████████████████████████████| 61kB 6.1MB/s
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests>=2.20->yfinance->finrl==0.0.3) (1.24.3)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests>=2.20->yfinance->finrl==0.0.3) (3.0.4)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests>=2.20->yfinance->finrl==0.0.3) (2.10)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests>=2.20->yfinance->finrl==0.0.3) (2020.12.5)
Requirement already satisfied: future in /usr/local/lib/python3.7/dist-packages (from pyglet<=1.5.0,>=1.4.0->gym>=0.17->finrl==0.0.3) (0.16.0)
Requirement already satisfied: dataclasses in /usr/local/lib/python3.7/dist-packages (from torch>=1.4.0->stable-baselines3[extra]->finrl==0.0.3) (0.6)
Requirement already satisfied: typing-extensions in /usr/local/lib/python3.7/dist-packages (from torch>=1.4.0->stable-baselines3[extra]->finrl==0.0.3) (3.7.4.3)
Requirement already satisfied: protobuf>=3.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard; extra == "extra"->stable-baselines3[extra]->finrl==0.0.3) (3.12.4)
Requirement already satisfied: grpcio>=1.24.3 in /usr/local/lib/python3.7/dist-packages (from tensorboard; extra == "extra"->stable-baselines3[extra]->finrl==0.0.3) (1.32.0)
Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in /usr/local/lib/python3.7/dist-packages (from tensorboard; extra == "extra"->stable-baselines3[extra]->finrl==0.0.3) (0.4.2)
Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.7/dist-packages (from tensorboard; extra == "extra"->stable-baselines3[extra]->finrl==0.0.3) (3.3.3)
Requirement already satisfied: absl-py>=0.4 in /usr/local/lib/python3.7/dist-packages (from tensorboard; extra == "extra"->stable-baselines3[extra]->finrl==0.0.3) (0.10.0)
Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard; extra == "extra"->stable-baselines3[extra]->finrl==0.0.3) (1.8.0)
Requirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.7/dist-packages (from tensorboard; extra == "extra"->stable-baselines3[extra]->finrl==0.0.3) (1.0.1)
Requirement already satisfied: google-auth<2,>=1.6.3 in /usr/local/lib/python3.7/dist-packages (from tensorboard; extra == "extra"->stable-baselines3[extra]->finrl==0.0.3) (1.27.0)
Requirement already satisfied: pickleshare in /usr/local/lib/python3.7/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.3) (0.7.5)
Requirement already satisfied: pexpect; sys_platform != "win32" in /usr/local/lib/python3.7/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.3) (4.8.0)
Requirement already satisfied: prompt-toolkit<2.0.0,>=1.0.4 in /usr/local/lib/python3.7/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.3) (1.0.18)
Requirement already satisfied: pygments in /usr/local/lib/python3.7/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.3) (2.6.1)
Requirement already satisfied: traitlets>=4.2 in /usr/local/lib/python3.7/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.3) (4.3.3)
Requirement already satisfied: simplegeneric>0.8 in /usr/local/lib/python3.7/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.3) (0.8.1)
Requirement already satisfied: decorator in /usr/local/lib/python3.7/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.3) (4.4.2)
Requirement already satisfied: pandas-datareader>=0.2 in /usr/local/lib/python3.7/dist-packages (from empyrical>=0.5.0->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.3) (0.9.0)
Requirement already satisfied: requests-oauthlib>=0.7.0 in /usr/local/lib/python3.7/dist-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard; extra == "extra"->stable-baselines3[extra]->finrl==0.0.3) (1.3.0)
Requirement already satisfied: importlib-metadata; python_version < "3.8" in /usr/local/lib/python3.7/dist-packages (from markdown>=2.6.8->tensorboard; extra == "extra"->stable-baselines3[extra]->finrl==0.0.3) (3.4.0)
Requirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.7/dist-packages (from google-auth<2,>=1.6.3->tensorboard; extra == "extra"->stable-baselines3[extra]->finrl==0.0.3) (0.2.8)
Requirement already satisfied: rsa<5,>=3.1.4; python_version >= "3.6" in /usr/local/lib/python3.7/dist-packages (from google-auth<2,>=1.6.3->tensorboard; extra == "extra"->stable-baselines3[extra]->finrl==0.0.3) (4.7.1)
Requirement already satisfied: cachetools<5.0,>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from google-auth<2,>=1.6.3->tensorboard; extra == "extra"->stable-baselines3[extra]->finrl==0.0.3) (4.2.1)
Requirement already satisfied: ptyprocess>=0.5 in /usr/local/lib/python3.7/dist-packages (from pexpect; sys_platform != "win32"->ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.3) (0.7.0)
Requirement already satisfied: wcwidth in /usr/local/lib/python3.7/dist-packages (from prompt-toolkit<2.0.0,>=1.0.4->ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.3) (0.2.5)
Requirement already satisfied: ipython-genutils in /usr/local/lib/python3.7/dist-packages (from traitlets>=4.2->ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.3) (0.2.0)
Requirement already satisfied: oauthlib>=3.0.0 in /usr/local/lib/python3.7/dist-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard; extra == "extra"->stable-baselines3[extra]->finrl==0.0.3) (3.1.0)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.7/dist-packages (from importlib-metadata; python_version < "3.8"->markdown>=2.6.8->tensorboard; extra == "extra"->stable-baselines3[extra]->finrl==0.0.3) (3.4.0)
Requirement already satisfied: pyasn1<0.5.0,>=0.4.6 in /usr/local/lib/python3.7/dist-packages (from pyasn1-modules>=0.2.1->google-auth<2,>=1.6.3->tensorboard; extra == "extra"->stable-baselines3[extra]->finrl==0.0.3) (0.4.8)
Building wheels for collected packages: finrl, yfinance, pyfolio, empyrical
Building wheel for finrl (setup.py) ... done
Created wheel for finrl: filename=finrl-0.0.3-cp37-none-any.whl size=38201 sha256=680913f069c396f38e0c508600b450102190f08e0b0bba53c58c334981ccbe6c
Stored in directory: /tmp/pip-ephem-wheel-cache-a1bbwmjm/wheels/9c/19/bf/c644def96612df1ad42c94d5304966797eaa3221dffc5efe0b
Building wheel for yfinance (setup.py) ... done
Created wheel for yfinance: filename=yfinance-0.1.55-py2.py3-none-any.whl size=22616 sha256=2a578f51d56d3d8fff23683c041d6815f487abf3c6c97d4739d122055a6599b3
Stored in directory: /root/.cache/pip/wheels/04/98/cc/2702a4242d60bdc14f48b4557c427ded1fe92aedf257d4565c
Building wheel for pyfolio (setup.py) ... done
Created wheel for pyfolio: filename=pyfolio-0.9.2+75.g4b901f6-cp37-none-any.whl size=75764 sha256=7e1ceb3360e57235c3d97bdbb36969c8ac05da709aa781413f1eca9088669323
Stored in directory: /tmp/pip-ephem-wheel-cache-a1bbwmjm/wheels/43/ce/d9/6752fb6e03205408773235435205a0519d2c608a94f1976e56
Building wheel for empyrical (setup.py) ... done
Created wheel for empyrical: filename=empyrical-0.5.5-cp37-none-any.whl size=39764 sha256=6b772c8c03b900a08799fdd831ee627277cc2c9241dc3103e2602fdd21781bb1
Stored in directory: /root/.cache/pip/wheels/ea/b2/c8/6769d8444d2f2e608fae2641833110668d0ffd1abeb2e9f3fc
Successfully built finrl yfinance pyfolio empyrical
Installing collected packages: int-date, stockstats, lxml, yfinance, stable-baselines3, empyrical, pyfolio, finrl
Found existing installation: lxml 4.2.6
Uninstalling lxml-4.2.6:
Successfully uninstalled lxml-4.2.6
Successfully installed empyrical-0.5.5 finrl-0.0.3 int-date-0.1.8 lxml-4.6.2 pyfolio-0.9.2+75.g4b901f6 stable-baselines3-0.10.0 stockstats-0.3.2 yfinance-0.1.55
###Markdown
2.2. Check if the additional packages needed are present; if not, install them.
* Yahoo Finance API
* pandas
* numpy
* matplotlib
* stockstats
* OpenAI gym
* stable-baselines
* tensorflow
* pyfolio
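As a quick sanity check, here is a minimal sketch (not part of FinRL; the package list is an assumption based on the imports used later in this notebook) that verifies the extra packages are importable:
###Code
import importlib

# Try importing each extra dependency and report which ones are missing.
for pkg in ["yfinance", "stockstats", "gym", "stable_baselines3", "pyfolio"]:
    try:
        importlib.import_module(pkg)
        print(f"{pkg}: OK")
    except ImportError:
        print(f"{pkg}: missing, install it with pip")
###Output
_____no_output_____
###Markdown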
2.3. Import Packages
###Code
import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
matplotlib.use('Agg')
import datetime
from finrl.config import config
from finrl.marketdata.yahoodownloader import YahooDownloader
from finrl.preprocessing.preprocessors import FeatureEngineer
from finrl.preprocessing.data import data_split
from finrl.env.env_portfolio import StockPortfolioEnv
from finrl.model.models import DRLAgent
from finrl.trade.backtest import backtest_stats, backtest_plot, get_daily_return, get_baseline,convert_daily_return_to_pyfolio_ts
import sys
sys.path.append("../FinRL-Library")
###Output
c:\Users\User\miniconda3\envs\finrl\lib\site-packages\pyfolio\pos.py:26: UserWarning: Module "zipline.assets" not found; multipliers will not be applied to position notionals.
warnings.warn(
###Markdown
2.4. Create Folders
###Code
import os
if not os.path.exists("./" + config.DATA_SAVE_DIR):
os.makedirs("./" + config.DATA_SAVE_DIR)
if not os.path.exists("./" + config.TRAINED_MODEL_DIR):
os.makedirs("./" + config.TRAINED_MODEL_DIR)
if not os.path.exists("./" + config.TENSORBOARD_LOG_DIR):
os.makedirs("./" + config.TENSORBOARD_LOG_DIR)
if not os.path.exists("./" + config.RESULTS_DIR):
os.makedirs("./" + config.RESULTS_DIR)
###Output
_____no_output_____
###Markdown
Part 3. Download Data
Yahoo Finance is a website that provides stock data, financial news, financial reports, etc. All the data provided by Yahoo Finance is free.
* FinRL uses the class **YahooDownloader** to fetch data from the Yahoo Finance API.
* Call Limit: Using the Public API (without authentication), you are limited to 2,000 requests per hour per IP (or up to a total of 48,000 requests a day).
###Code
print(config.DOW_30_TICKER)
df = YahooDownloader(start_date = '2008-01-01',
end_date = '2021-01-01',
ticker_list = config.DOW_30_TICKER).fetch_data()
df.head(10)
df.shape
###Output
_____no_output_____
###Markdown
Part 4: Preprocess Data
Data preprocessing is a crucial step for training a high-quality machine learning model. We need to check for missing data and do feature engineering in order to convert the data into a model-ready state.
* Add technical indicators. In practical trading, various information needs to be taken into account, for example the historical stock prices, current holding shares, technical indicators, etc. In this article, we demonstrate two trend-following technical indicators: MACD and RSI.
* Add turbulence index. Risk-aversion reflects whether an investor will choose to preserve the capital. It also influences one's trading strategy when facing different market volatility levels. To control the risk in a worst-case scenario, such as the financial crisis of 2007-2008, FinRL employs the financial turbulence index that measures extreme asset price fluctuation.
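FinRL's FeatureEngineer computes these indicators internally (via the stockstats package), so the cell below is all that is needed in practice. Purely for illustration, here is a hand-rolled sketch of the standard MACD and RSI formulas on a pandas Series of closing prices (`close` is an assumed input, not a variable defined in this notebook):
###Code
import pandas as pd

def macd_histogram(close, fast=12, slow=26, signal=9):
    # MACD line (fast EMA minus slow EMA) minus its signal-line EMA.
    ema_fast = close.ewm(span=fast, adjust=False).mean()
    ema_slow = close.ewm(span=slow, adjust=False).mean()
    macd_line = ema_fast - ema_slow
    return macd_line - macd_line.ewm(span=signal, adjust=False).mean()

def rsi(close, window=14):
    # Relative Strength Index computed from average gains and losses.
    delta = close.diff()
    gain = delta.clip(lower=0).rolling(window).mean()
    loss = (-delta.clip(upper=0)).rolling(window).mean()
    return 100 - 100 / (1 + gain / loss)
###Output
_____no_output_____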
###Code
fe = FeatureEngineer(
use_technical_indicator=True,
use_turbulence=False,
user_defined_feature = False)
df = fe.preprocess_data(df)
df.shape
df.head()
###Output
_____no_output_____
###Markdown
Add covariance matrix as states
###Code
# add covariance matrix as states
df=df.sort_values(['date','tic'],ignore_index=True)
df.index = df.date.factorize()[0]
cov_list = []
# look back is one year
lookback=252
for i in range(lookback,len(df.index.unique())):
data_lookback = df.loc[i-lookback:i,:]
price_lookback=data_lookback.pivot_table(index = 'date',columns = 'tic', values = 'close')
return_lookback = price_lookback.pct_change().dropna()
covs = return_lookback.cov().values
cov_list.append(covs)
df_cov = pd.DataFrame({'date':df.date.unique()[lookback:],'cov_list':cov_list})
df = df.merge(df_cov, on='date')
df = df.sort_values(['date','tic']).reset_index(drop=True)
df.shape
df.head()
###Output
_____no_output_____
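###Markdown
A quick sanity check on the merged covariance column (illustrative; assuming the full DOW 30 universe, each entry should be a symmetric 30 x 30 matrix):
###Code
first_cov = df['cov_list'].iloc[0]
print(first_cov.shape)                      # expected: (30, 30)
print(np.allclose(first_cov, first_cov.T))  # covariance matrices are symmetric
###Output
_____no_output_____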
###Markdown
Part 5. Design Environment
Considering the stochastic and interactive nature of automated stock trading tasks, a financial task is modeled as a **Markov Decision Process (MDP)** problem. The training process involves observing a stock price change, taking an action, and calculating the reward, so that the agent adjusts its strategy accordingly. By interacting with the environment, the trading agent derives a trading strategy that maximizes rewards as time proceeds.

Our trading environments, based on the OpenAI Gym framework, simulate live stock markets with real market data according to the principle of time-driven simulation.

The action space describes the allowed actions through which the agent interacts with the environment. Normally, an action includes three values: {-1, 0, 1}, where -1, 0, 1 represent selling, holding, and buying one share. An action can also be carried out on multiple shares. We use an action space {-k, ..., -1, 0, 1, ..., k}, where k denotes the number of shares to buy and -k denotes the number of shares to sell. For example, "Buy 10 shares of AAPL" or "Sell 10 shares of AAPL" are 10 or -10, respectively. A continuous action space needs to be normalized to [-1, 1], since the policy is defined on a Gaussian distribution, which needs to be normalized and symmetric. In the portfolio allocation environment used here, the raw actions are further mapped to non-negative weights that sum to one via a softmax; see the sketch after the next cell.

Training data split: 2009-01-01 to 2018-12-31
###Code
train = data_split(df, '2009-01-01','2019-01-01')
#trade = data_split(df, '2020-01-01', config.END_DATE)
train.head()
###Output
_____no_output_____
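###Markdown
Inside the environment defined below, the raw actions emitted by the policy are turned into portfolio weights with a softmax, so the weights are non-negative and sum to one. A minimal sketch of that mapping (the environment implements the same logic in `softmax_normalization`):
###Code
import numpy as np

raw_actions = np.array([0.5, -1.2, 2.0, 0.1])   # example policy output
weights = np.exp(raw_actions) / np.sum(np.exp(raw_actions))
print(weights, weights.sum())                   # non-negative weights summing to 1.0
###Output
_____no_output_____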
###Markdown
Environment for Portfolio Allocation
###Code
import numpy as np
import pandas as pd
from gym.utils import seeding
import gym
from gym import spaces
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
from stable_baselines3.common.vec_env import DummyVecEnv
class StockPortfolioEnv(gym.Env):
"""A single stock trading environment for OpenAI gym
Attributes
----------
df: DataFrame
input data
stock_dim : int
number of unique stocks
hmax : int
maximum number of shares to trade
initial_amount : int
start money
transaction_cost_pct: float
transaction cost percentage per trade
reward_scaling: float
scaling factor for reward, good for training
state_space: int
the dimension of input features
action_space: int
equals stock dimension
tech_indicator_list: list
a list of technical indicator names
turbulence_threshold: int
a threshold to control risk aversion
day: int
an increment number to control date
Methods
-------
_sell_stock()
perform sell action based on the sign of the action
_buy_stock()
perform buy action based on the sign of the action
step()
at each step the agent will return actions, then
we will calculate the reward, and return the next observation.
reset()
reset the environment
render()
use render to return other functions
save_asset_memory()
return account value at each time step
save_action_memory()
return actions/positions at each time step
"""
metadata = {'render.modes': ['human']}
def __init__(self,
df,
stock_dim,
hmax,
initial_amount,
transaction_cost_pct,
reward_scaling,
state_space,
action_space,
tech_indicator_list,
turbulence_threshold=None,
lookback=252,
day = 0):
#super(StockEnv, self).__init__()
#money = 10 , scope = 1
self.day = day
self.lookback=lookback
self.df = df
self.stock_dim = stock_dim
self.hmax = hmax
self.initial_amount = initial_amount
self.transaction_cost_pct =transaction_cost_pct
self.reward_scaling = reward_scaling
self.state_space = state_space
self.action_space = action_space
self.tech_indicator_list = tech_indicator_list
# action_space normalization and shape is self.stock_dim
self.action_space = spaces.Box(low = 0, high = 1,shape = (self.action_space,))
# Shape = (34, 30)
# covariance matrix + technical indicators
self.observation_space = spaces.Box(low=-np.inf, high=np.inf, shape = (self.state_space+len(self.tech_indicator_list),self.state_space))
# load data from a pandas dataframe
self.data = self.df.loc[self.day,:]
self.covs = self.data['cov_list'].values[0]
self.state = np.append(np.array(self.covs), [self.data[tech].values.tolist() for tech in self.tech_indicator_list ], axis=0)
self.terminal = False
self.turbulence_threshold = turbulence_threshold
# initialize state: initial portfolio return + individual stock return + individual weights
self.portfolio_value = self.initial_amount
# memorize portfolio value each step
self.asset_memory = [self.initial_amount]
# memorize portfolio return each step
self.portfolio_return_memory = [0]
self.actions_memory=[[1/self.stock_dim]*self.stock_dim]
self.date_memory=[self.data.date.unique()[0]]
def step(self, actions):
# print(self.day)
self.terminal = self.day >= len(self.df.index.unique())-1
# print(actions)
if self.terminal:
df = pd.DataFrame(self.portfolio_return_memory)
df.columns = ['daily_return']
plt.plot(df.daily_return.cumsum(),'r')
plt.savefig('results/cumulative_reward.png')
plt.close()
plt.plot(self.portfolio_return_memory,'r')
plt.savefig('results/rewards.png')
plt.close()
print("=================================")
print("begin_total_asset:{}".format(self.asset_memory[0]))
print("end_total_asset:{}".format(self.portfolio_value))
df_daily_return = pd.DataFrame(self.portfolio_return_memory)
df_daily_return.columns = ['daily_return']
if df_daily_return['daily_return'].std() !=0:
sharpe = (252**0.5)*df_daily_return['daily_return'].mean()/ \
df_daily_return['daily_return'].std()
print("Sharpe: ",sharpe)
print("=================================")
return self.state, self.reward, self.terminal,{}
else:
#print("Model actions: ",actions)
# actions are the portfolio weight
# normalize to sum of 1
#if (np.array(actions) - np.array(actions).min()).sum() != 0:
# norm_actions = (np.array(actions) - np.array(actions).min()) / (np.array(actions) - np.array(actions).min()).sum()
#else:
# norm_actions = actions
weights = self.softmax_normalization(actions)
#print("Normalized actions: ", weights)
self.actions_memory.append(weights)
last_day_memory = self.data
#load next state
self.day += 1
self.data = self.df.loc[self.day,:]
self.covs = self.data['cov_list'].values[0]
self.state = np.append(np.array(self.covs), [self.data[tech].values.tolist() for tech in self.tech_indicator_list ], axis=0)
#print(self.state)
# calculate portfolio return
# individual stocks' return * weight
portfolio_return = sum(((self.data.close.values / last_day_memory.close.values)-1)*weights)
# update portfolio value
new_portfolio_value = self.portfolio_value*(1+portfolio_return)
self.portfolio_value = new_portfolio_value
# save into memory
self.portfolio_return_memory.append(portfolio_return)
self.date_memory.append(self.data.date.unique()[0])
self.asset_memory.append(new_portfolio_value)
# the reward is the new portfolio value, i.e. the end portfolio value
self.reward = new_portfolio_value
#print("Step reward: ", self.reward)
#self.reward = self.reward*self.reward_scaling
return self.state, self.reward, self.terminal, {}
def reset(self):
self.asset_memory = [self.initial_amount]
self.day = 0
self.data = self.df.loc[self.day,:]
# load states
self.covs = self.data['cov_list'].values[0]
self.state = np.append(np.array(self.covs), [self.data[tech].values.tolist() for tech in self.tech_indicator_list ], axis=0)
self.portfolio_value = self.initial_amount
#self.cost = 0
#self.trades = 0
self.terminal = False
self.portfolio_return_memory = [0]
self.actions_memory=[[1/self.stock_dim]*self.stock_dim]
self.date_memory=[self.data.date.unique()[0]]
return self.state
def render(self, mode='human'):
return self.state
def softmax_normalization(self, actions):
numerator = np.exp(actions)
denominator = np.sum(np.exp(actions))
softmax_output = numerator/denominator
return softmax_output
def save_asset_memory(self):
date_list = self.date_memory
portfolio_return = self.portfolio_return_memory
#print(len(date_list))
#print(len(asset_list))
df_account_value = pd.DataFrame({'date':date_list,'daily_return':portfolio_return})
return df_account_value
def save_action_memory(self):
# date and close price length must match actions length
date_list = self.date_memory
df_date = pd.DataFrame(date_list)
df_date.columns = ['date']
action_list = self.actions_memory
df_actions = pd.DataFrame(action_list)
df_actions.columns = self.data.tic.values
df_actions.index = df_date.date
#df_actions = pd.DataFrame({'date':date_list,'actions':action_list})
return df_actions
def _seed(self, seed=None):
self.np_random, seed = seeding.np_random(seed)
return [seed]
def get_sb_env(self):
e = DummyVecEnv([lambda: self])
obs = e.reset()
return e, obs
stock_dimension = len(train.tic.unique())
state_space = stock_dimension
print(f"Stock Dimension: {stock_dimension}, State Space: {state_space}")
env_kwargs = {
"hmax": 100,
"initial_amount": 1000000,
"transaction_cost_pct": 0.001,
"state_space": state_space,
"stock_dim": stock_dimension,
"tech_indicator_list": config.TECHNICAL_INDICATORS_LIST,
"action_space": stock_dimension,
"reward_scaling": 1e-4
}
e_train_gym = StockPortfolioEnv(df = train, **env_kwargs)
env_train, _ = e_train_gym.get_sb_env()
print(type(env_train))
###Output
<class 'stable_baselines3.common.vec_env.dummy_vec_env.DummyVecEnv'>
###Markdown
Part 6: Implement DRL Algorithms
* The implementation of the DRL algorithms is based on **OpenAI Baselines** and **Stable Baselines**. Stable Baselines is a fork of OpenAI Baselines, with major structural refactoring and code cleanups.
* The FinRL library includes fine-tuned standard DRL algorithms such as DQN, DDPG, Multi-Agent DDPG, PPO, SAC, A2C and TD3. Users can also design their own DRL algorithms by adapting these implementations.
###Code
# initialize
agent = DRLAgent(env = env_train)
###Output
_____no_output_____
###Markdown
Model 1: **A2C**
###Code
agent = DRLAgent(env = env_train)
A2C_PARAMS = {"n_steps": 5, "ent_coef": 0.005, "learning_rate": 0.0002}
model_a2c = agent.get_model(model_name="a2c",model_kwargs = A2C_PARAMS)
trained_a2c = agent.train_model(model=model_a2c,
tb_log_name='a2c',
total_timesteps=60000)
###Output
Logging to tensorboard_log/a2c\a2c_1
------------------------------------
| time/ | |
| fps | 95 |
| iterations | 100 |
| time_elapsed | 5 |
| total_timesteps | 500 |
| train/ | |
| entropy_loss | -42.5 |
| explained_variance | 1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 99 |
| policy_loss | 2.03e+08 |
| std | 0.997 |
| value_loss | 2.75e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 95 |
| iterations | 200 |
| time_elapsed | 10 |
| total_timesteps | 1000 |
| train/ | |
| entropy_loss | -42.4 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 199 |
| policy_loss | 2.54e+08 |
| std | 0.996 |
| value_loss | 4.45e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 95 |
| iterations | 300 |
| time_elapsed | 15 |
| total_timesteps | 1500 |
| train/ | |
| entropy_loss | -42.4 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 299 |
| policy_loss | 4.08e+08 |
| std | 0.995 |
| value_loss | 1.08e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 94 |
| iterations | 400 |
| time_elapsed | 21 |
| total_timesteps | 2000 |
| train/ | |
| entropy_loss | -42.4 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 399 |
| policy_loss | 4.43e+08 |
| std | 0.994 |
| value_loss | 1.45e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 94 |
| iterations | 500 |
| time_elapsed | 26 |
| total_timesteps | 2500 |
| train/ | |
| entropy_loss | -42.4 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 499 |
| policy_loss | 6.16e+08 |
| std | 0.994 |
| value_loss | 2.81e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4793289.715628677
Sharpe: 1.053881816582265
=================================
------------------------------------
| time/ | |
| fps | 92 |
| iterations | 600 |
| time_elapsed | 32 |
| total_timesteps | 3000 |
| train/ | |
| entropy_loss | -42.4 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 599 |
| policy_loss | 1.79e+08 |
| std | 0.994 |
| value_loss | 2.35e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 92 |
| iterations | 700 |
| time_elapsed | 37 |
| total_timesteps | 3500 |
| train/ | |
| entropy_loss | -42.4 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 699 |
| policy_loss | 2.32e+08 |
| std | 0.994 |
| value_loss | 3.73e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 93 |
| iterations | 800 |
| time_elapsed | 42 |
| total_timesteps | 4000 |
| train/ | |
| entropy_loss | -42.4 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 799 |
| policy_loss | 3.57e+08 |
| std | 0.994 |
| value_loss | 9.07e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 92 |
| iterations | 900 |
| time_elapsed | 48 |
| total_timesteps | 4500 |
| train/ | |
| entropy_loss | -42.4 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 899 |
| policy_loss | 3.85e+08 |
| std | 0.993 |
| value_loss | 1.14e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 92 |
| iterations | 1000 |
| time_elapsed | 53 |
| total_timesteps | 5000 |
| train/ | |
| entropy_loss | -42.3 |
| explained_variance | 1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 999 |
| policy_loss | 5.72e+08 |
| std | 0.993 |
| value_loss | 2.35e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4378951.819743196
Sharpe: 1.0031412042385455
=================================
------------------------------------
| time/ | |
| fps | 92 |
| iterations | 1100 |
| time_elapsed | 59 |
| total_timesteps | 5500 |
| train/ | |
| entropy_loss | -42.3 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 1099 |
| policy_loss | 1.81e+08 |
| std | 0.993 |
| value_loss | 2.53e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 92 |
| iterations | 1200 |
| time_elapsed | 65 |
| total_timesteps | 6000 |
| train/ | |
| entropy_loss | -42.4 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 1199 |
| policy_loss | 2.77e+08 |
| std | 0.993 |
| value_loss | 4.27e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 92 |
| iterations | 1300 |
| time_elapsed | 70 |
| total_timesteps | 6500 |
| train/ | |
| entropy_loss | -42.3 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 1299 |
| policy_loss | 3.64e+08 |
| std | 0.992 |
| value_loss | 9.02e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 92 |
| iterations | 1400 |
| time_elapsed | 75 |
| total_timesteps | 7000 |
| train/ | |
| entropy_loss | -42.3 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 1399 |
| policy_loss | 4.09e+08 |
| std | 0.992 |
| value_loss | 1.25e+14 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 92 |
| iterations | 1500 |
| time_elapsed | 81 |
| total_timesteps | 7500 |
| train/ | |
| entropy_loss | -42.3 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 1499 |
| policy_loss | 5.82e+08 |
| std | 0.992 |
| value_loss | 2.63e+14 |
-------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4728968.986130338
Sharpe: 1.0425309298729608
=================================
------------------------------------
| time/ | |
| fps | 91 |
| iterations | 1600 |
| time_elapsed | 86 |
| total_timesteps | 8000 |
| train/ | |
| entropy_loss | -42.3 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 1599 |
| policy_loss | 1.98e+08 |
| std | 0.992 |
| value_loss | 2.42e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 91 |
| iterations | 1700 |
| time_elapsed | 92 |
| total_timesteps | 8500 |
| train/ | |
| entropy_loss | -42.3 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 1699 |
| policy_loss | 2.67e+08 |
| std | 0.992 |
| value_loss | 4.37e+13 |
-------------------------------------
-------------------------------------
| time/ | |
| fps | 92 |
| iterations | 1800 |
| time_elapsed | 97 |
| total_timesteps | 9000 |
| train/ | |
| entropy_loss | -42.3 |
| explained_variance | -2.38e-07 |
| learning_rate | 0.0002 |
| n_updates | 1799 |
| policy_loss | 3.71e+08 |
| std | 0.991 |
| value_loss | 9.47e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 92 |
| iterations | 1900 |
| time_elapsed | 103 |
| total_timesteps | 9500 |
| train/ | |
| entropy_loss | -42.3 |
| explained_variance | 5.96e-08 |
| learning_rate | 0.0002 |
| n_updates | 1899 |
| policy_loss | 4.94e+08 |
| std | 0.991 |
| value_loss | 1.39e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 92 |
| iterations | 2000 |
| time_elapsed | 108 |
| total_timesteps | 10000 |
| train/ | |
| entropy_loss | -42.3 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 1999 |
| policy_loss | 6.35e+08 |
| std | 0.99 |
| value_loss | 2.99e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4805614.282393655
Sharpe: 1.0557977998086014
=================================
------------------------------------
| time/ | |
| fps | 91 |
| iterations | 2100 |
| time_elapsed | 114 |
| total_timesteps | 10500 |
| train/ | |
| entropy_loss | -42.3 |
| explained_variance | 2.98e-07 |
| learning_rate | 0.0002 |
| n_updates | 2099 |
| policy_loss | 1.74e+08 |
| std | 0.99 |
| value_loss | 2e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 92 |
| iterations | 2200 |
| time_elapsed | 119 |
| total_timesteps | 11000 |
| train/ | |
| entropy_loss | -42.3 |
| explained_variance | 1.79e-07 |
| learning_rate | 0.0002 |
| n_updates | 2199 |
| policy_loss | 2.57e+08 |
| std | 0.991 |
| value_loss | 3.91e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 92 |
| iterations | 2300 |
| time_elapsed | 124 |
| total_timesteps | 11500 |
| train/ | |
| entropy_loss | -42.3 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 2299 |
| policy_loss | 3.66e+08 |
| std | 0.991 |
| value_loss | 8.14e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 92 |
| iterations | 2400 |
| time_elapsed | 130 |
| total_timesteps | 12000 |
| train/ | |
| entropy_loss | -42.3 |
| explained_variance | 1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 2399 |
| policy_loss | 3.75e+08 |
| std | 0.99 |
| value_loss | 1.18e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 92 |
| iterations | 2500 |
| time_elapsed | 135 |
| total_timesteps | 12500 |
| train/ | |
| entropy_loss | -42.3 |
| explained_variance | 1.79e-07 |
| learning_rate | 0.0002 |
| n_updates | 2499 |
| policy_loss | 6.22e+08 |
| std | 0.991 |
| value_loss | 2.51e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4485121.530397891
Sharpe: 1.015114912469572
=================================
-------------------------------------
| time/ | |
| fps | 91 |
| iterations | 2600 |
| time_elapsed | 141 |
| total_timesteps | 13000 |
| train/ | |
| entropy_loss | -42.3 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 2599 |
| policy_loss | 1.58e+08 |
| std | 0.99 |
| value_loss | 2e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 91 |
| iterations | 2700 |
| time_elapsed | 147 |
| total_timesteps | 13500 |
| train/ | |
| entropy_loss | -42.3 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 2699 |
| policy_loss | 2.51e+08 |
| std | 0.99 |
| value_loss | 4.43e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 91 |
| iterations | 2800 |
| time_elapsed | 152 |
| total_timesteps | 14000 |
| train/ | |
| entropy_loss | -42.3 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 2799 |
| policy_loss | 3.57e+08 |
| std | 0.99 |
| value_loss | 9.19e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 91 |
| iterations | 2900 |
| time_elapsed | 158 |
| total_timesteps | 14500 |
| train/ | |
| entropy_loss | -42.2 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 2899 |
| policy_loss | 4.27e+08 |
| std | 0.989 |
| value_loss | 1.34e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 91 |
| iterations | 3000 |
| time_elapsed | 163 |
| total_timesteps | 15000 |
| train/ | |
| entropy_loss | -42.2 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 2999 |
| policy_loss | 5.72e+08 |
| std | 0.989 |
| value_loss | 2.61e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4711570.909715502
Sharpe: 1.0398787758840964
=================================
------------------------------------
| time/ | |
| fps | 91 |
| iterations | 3100 |
| time_elapsed | 169 |
| total_timesteps | 15500 |
| train/ | |
| entropy_loss | -42.2 |
| explained_variance | 1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 3099 |
| policy_loss | 1.74e+08 |
| std | 0.988 |
| value_loss | 2.05e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 91 |
| iterations | 3200 |
| time_elapsed | 174 |
| total_timesteps | 16000 |
| train/ | |
| entropy_loss | -42.2 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 3199 |
| policy_loss | 2.38e+08 |
| std | 0.987 |
| value_loss | 3.94e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 91 |
| iterations | 3300 |
| time_elapsed | 180 |
| total_timesteps | 16500 |
| train/ | |
| entropy_loss | -42.2 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 3299 |
| policy_loss | 3.59e+08 |
| std | 0.987 |
| value_loss | 9.77e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 91 |
| iterations | 3400 |
| time_elapsed | 185 |
| total_timesteps | 17000 |
| train/ | |
| entropy_loss | -42.2 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 3399 |
| policy_loss | 4.93e+08 |
| std | 0.987 |
| value_loss | 1.48e+14 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 91 |
| iterations | 3500 |
| time_elapsed | 190 |
| total_timesteps | 17500 |
| train/ | |
| entropy_loss | -42.2 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 3499 |
| policy_loss | 6.66e+08 |
| std | 0.986 |
| value_loss | 2.82e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4983520.458955503
Sharpe: 1.073897644974824
=================================
------------------------------------
| time/ | |
| fps | 91 |
| iterations | 3600 |
| time_elapsed | 196 |
| total_timesteps | 18000 |
| train/ | |
| entropy_loss | -42.1 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 3599 |
| policy_loss | 1.84e+08 |
| std | 0.986 |
| value_loss | 2.11e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 91 |
| iterations | 3700 |
| time_elapsed | 201 |
| total_timesteps | 18500 |
| train/ | |
| entropy_loss | -42.1 |
| explained_variance | 1.79e-07 |
| learning_rate | 0.0002 |
| n_updates | 3699 |
| policy_loss | 2.7e+08 |
| std | 0.985 |
| value_loss | 4.45e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 91 |
| iterations | 3800 |
| time_elapsed | 207 |
| total_timesteps | 19000 |
| train/ | |
| entropy_loss | -42.1 |
| explained_variance | 1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 3799 |
| policy_loss | 3.9e+08 |
| std | 0.985 |
| value_loss | 1.11e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 91 |
| iterations | 3900 |
| time_elapsed | 212 |
| total_timesteps | 19500 |
| train/ | |
| entropy_loss | -42.1 |
| explained_variance | 5.96e-08 |
| learning_rate | 0.0002 |
| n_updates | 3899 |
| policy_loss | 4.66e+08 |
| std | 0.984 |
| value_loss | 1.43e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 91 |
| iterations | 4000 |
| time_elapsed | 217 |
| total_timesteps | 20000 |
| train/ | |
| entropy_loss | -42.1 |
| explained_variance | 5.96e-08 |
| learning_rate | 0.0002 |
| n_updates | 3999 |
| policy_loss | 6.37e+08 |
| std | 0.983 |
| value_loss | 2.99e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:5161228.0320492955
Sharpe: 1.0964722017407302
=================================
-------------------------------------
| time/ | |
| fps | 91 |
| iterations | 4100 |
| time_elapsed | 223 |
| total_timesteps | 20500 |
| train/ | |
| entropy_loss | -42.1 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 4099 |
| policy_loss | 1.86e+08 |
| std | 0.983 |
| value_loss | 2.13e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 91 |
| iterations | 4200 |
| time_elapsed | 228 |
| total_timesteps | 21000 |
| train/ | |
| entropy_loss | -42 |
| explained_variance | 5.96e-08 |
| learning_rate | 0.0002 |
| n_updates | 4199 |
| policy_loss | 2.29e+08 |
| std | 0.983 |
| value_loss | 3.76e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 91 |
| iterations | 4300 |
| time_elapsed | 234 |
| total_timesteps | 21500 |
| train/ | |
| entropy_loss | -42 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 4299 |
| policy_loss | 3.36e+08 |
| std | 0.982 |
| value_loss | 8.73e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 91 |
| iterations | 4400 |
| time_elapsed | 239 |
| total_timesteps | 22000 |
| train/ | |
| entropy_loss | -42 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 4399 |
| policy_loss | 4.11e+08 |
| std | 0.982 |
| value_loss | 1.2e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 91 |
| iterations | 4500 |
| time_elapsed | 244 |
| total_timesteps | 22500 |
| train/ | |
| entropy_loss | -42 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 4499 |
| policy_loss | 6.03e+08 |
| std | 0.983 |
| value_loss | 2.34e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4620640.595635172
Sharpe: 1.0245905173891738
=================================
------------------------------------
| time/ | |
| fps | 91 |
| iterations | 4600 |
| time_elapsed | 250 |
| total_timesteps | 23000 |
| train/ | |
| entropy_loss | -42 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 4599 |
| policy_loss | 1.81e+08 |
| std | 0.982 |
| value_loss | 2.07e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 91 |
| iterations | 4700 |
| time_elapsed | 255 |
| total_timesteps | 23500 |
| train/ | |
| entropy_loss | -42 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 4699 |
| policy_loss | 2.44e+08 |
| std | 0.982 |
| value_loss | 3.99e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 91 |
| iterations | 4800 |
| time_elapsed | 260 |
| total_timesteps | 24000 |
| train/ | |
| entropy_loss | -42 |
| explained_variance | -2.38e-07 |
| learning_rate | 0.0002 |
| n_updates | 4799 |
| policy_loss | 3.82e+08 |
| std | 0.982 |
| value_loss | 9.16e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 92 |
| iterations | 4900 |
| time_elapsed | 266 |
| total_timesteps | 24500 |
| train/ | |
| entropy_loss | -42 |
| explained_variance | 1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 4899 |
| policy_loss | 4.06e+08 |
| std | 0.981 |
| value_loss | 1.23e+14 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 92 |
| iterations | 5000 |
| time_elapsed | 271 |
| total_timesteps | 25000 |
| train/ | |
| entropy_loss | -42 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 4999 |
| policy_loss | 5.83e+08 |
| std | 0.981 |
| value_loss | 2.28e+14 |
-------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4681265.59337581
Sharpe: 1.0250225579578573
=================================
-------------------------------------
| time/ | |
| fps | 92 |
| iterations | 5100 |
| time_elapsed | 276 |
| total_timesteps | 25500 |
| train/ | |
| entropy_loss | -42 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 5099 |
| policy_loss | 1.99e+08 |
| std | 0.98 |
| value_loss | 2.34e+13 |
-------------------------------------
-------------------------------------
| time/ | |
| fps | 92 |
| iterations | 5200 |
| time_elapsed | 282 |
| total_timesteps | 26000 |
| train/ | |
| entropy_loss | -42 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 5199 |
| policy_loss | 2.38e+08 |
| std | 0.98 |
| value_loss | 3.92e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 92 |
| iterations | 5300 |
| time_elapsed | 287 |
| total_timesteps | 26500 |
| train/ | |
| entropy_loss | -41.9 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 5299 |
| policy_loss | 3.44e+08 |
| std | 0.979 |
| value_loss | 7.84e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 92 |
| iterations | 5400 |
| time_elapsed | 292 |
| total_timesteps | 27000 |
| train/ | |
| entropy_loss | -41.9 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 5399 |
| policy_loss | 4.1e+08 |
| std | 0.979 |
| value_loss | 1.21e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 92 |
| iterations | 5500 |
| time_elapsed | 297 |
| total_timesteps | 27500 |
| train/ | |
| entropy_loss | -41.9 |
| explained_variance | 1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 5499 |
| policy_loss | 5.49e+08 |
| std | 0.978 |
| value_loss | 2.25e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4567132.020351956
Sharpe: 1.0193821127305023
=================================
------------------------------------
| time/ | |
| fps | 92 |
| iterations | 5600 |
| time_elapsed | 303 |
| total_timesteps | 28000 |
| train/ | |
| entropy_loss | -41.9 |
| explained_variance | 5.96e-08 |
| learning_rate | 0.0002 |
| n_updates | 5599 |
| policy_loss | 1.76e+08 |
| std | 0.978 |
| value_loss | 2.34e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 92 |
| iterations | 5700 |
| time_elapsed | 308 |
| total_timesteps | 28500 |
| train/ | |
| entropy_loss | -41.9 |
| explained_variance | 1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 5699 |
| policy_loss | 2.42e+08 |
| std | 0.978 |
| value_loss | 4.26e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 92 |
| iterations | 5800 |
| time_elapsed | 313 |
| total_timesteps | 29000 |
| train/ | |
| entropy_loss | -41.9 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 5799 |
| policy_loss | 3.75e+08 |
| std | 0.977 |
| value_loss | 8.96e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 92 |
| iterations | 5900 |
| time_elapsed | 319 |
| total_timesteps | 29500 |
| train/ | |
| entropy_loss | -41.9 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 5899 |
| policy_loss | 4.23e+08 |
| std | 0.976 |
| value_loss | 1.26e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 92 |
| iterations | 6000 |
| time_elapsed | 324 |
| total_timesteps | 30000 |
| train/ | |
| entropy_loss | -41.8 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 5999 |
| policy_loss | 6.72e+08 |
| std | 0.976 |
| value_loss | 2.56e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4900125.003411594
Sharpe: 1.0637641148305954
=================================
------------------------------------
| time/ | |
| fps | 92 |
| iterations | 6100 |
| time_elapsed | 330 |
| total_timesteps | 30500 |
| train/ | |
| entropy_loss | -41.8 |
| explained_variance | 1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 6099 |
| policy_loss | 1.94e+08 |
| std | 0.975 |
| value_loss | 2.42e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 92 |
| iterations | 6200 |
| time_elapsed | 335 |
| total_timesteps | 31000 |
| train/ | |
| entropy_loss | -41.8 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 6199 |
| policy_loss | 2.33e+08 |
| std | 0.975 |
| value_loss | 4.13e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 92 |
| iterations | 6300 |
| time_elapsed | 340 |
| total_timesteps | 31500 |
| train/ | |
| entropy_loss | -41.8 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 6299 |
| policy_loss | 3.32e+08 |
| std | 0.975 |
| value_loss | 8.95e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 92 |
| iterations | 6400 |
| time_elapsed | 345 |
| total_timesteps | 32000 |
| train/ | |
| entropy_loss | -41.8 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 6399 |
| policy_loss | 4.05e+08 |
| std | 0.975 |
| value_loss | 1.27e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 92 |
| iterations | 6500 |
| time_elapsed | 350 |
| total_timesteps | 32500 |
| train/ | |
| entropy_loss | -41.8 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 6499 |
| policy_loss | 6.9e+08 |
| std | 0.975 |
| value_loss | 3.01e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:5225384.174262149
Sharpe: 1.104295113460634
=================================
-------------------------------------
| time/ | |
| fps | 92 |
| iterations | 6600 |
| time_elapsed | 356 |
| total_timesteps | 33000 |
| train/ | |
| entropy_loss | -41.8 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 6599 |
| policy_loss | 1.74e+08 |
| std | 0.974 |
| value_loss | 2.04e+13 |
-------------------------------------
-------------------------------------
| time/ | |
| fps | 92 |
| iterations | 6700 |
| time_elapsed | 361 |
| total_timesteps | 33500 |
| train/ | |
| entropy_loss | -41.7 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 6699 |
| policy_loss | 2.23e+08 |
| std | 0.973 |
| value_loss | 3.66e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 92 |
| iterations | 6800 |
| time_elapsed | 367 |
| total_timesteps | 34000 |
| train/ | |
| entropy_loss | -41.7 |
| explained_variance | 5.96e-08 |
| learning_rate | 0.0002 |
| n_updates | 6799 |
| policy_loss | 3.32e+08 |
| std | 0.973 |
| value_loss | 7.43e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 92 |
| iterations | 6900 |
| time_elapsed | 372 |
| total_timesteps | 34500 |
| train/ | |
| entropy_loss | -41.7 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 6899 |
| policy_loss | 4.19e+08 |
| std | 0.973 |
| value_loss | 9.76e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 92 |
| iterations | 7000 |
| time_elapsed | 377 |
| total_timesteps | 35000 |
| train/ | |
| entropy_loss | -41.7 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 6999 |
| policy_loss | 6.31e+08 |
| std | 0.972 |
| value_loss | 2.56e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4809405.635984273
Sharpe: 1.0564943427802729
=================================
------------------------------------
| time/ | |
| fps | 92 |
| iterations | 7100 |
| time_elapsed | 383 |
| total_timesteps | 35500 |
| train/ | |
| entropy_loss | -41.7 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 7099 |
| policy_loss | 1.53e+08 |
| std | 0.972 |
| value_loss | 1.83e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 92 |
| iterations | 7200 |
| time_elapsed | 388 |
| total_timesteps | 36000 |
| train/ | |
| entropy_loss | -41.7 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 7199 |
| policy_loss | 2.11e+08 |
| std | 0.972 |
| value_loss | 3.28e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 92 |
| iterations | 7300 |
| time_elapsed | 394 |
| total_timesteps | 36500 |
| train/ | |
| entropy_loss | -41.7 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 7299 |
| policy_loss | 3.36e+08 |
| std | 0.971 |
| value_loss | 7.37e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 92 |
| iterations | 7400 |
| time_elapsed | 399 |
| total_timesteps | 37000 |
| train/ | |
| entropy_loss | -41.7 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 7399 |
| policy_loss | 3.78e+08 |
| std | 0.97 |
| value_loss | 9.13e+13 |
-------------------------------------
-------------------------------------
| time/ | |
| fps | 92 |
| iterations | 7500 |
| time_elapsed | 404 |
| total_timesteps | 37500 |
| train/ | |
| entropy_loss | -41.7 |
| explained_variance | -2.38e-07 |
| learning_rate | 0.0002 |
| n_updates | 7499 |
| policy_loss | 5.93e+08 |
| std | 0.97 |
| value_loss | 2.22e+14 |
-------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4522596.3556108475
Sharpe: 1.0151726143707411
=================================
------------------------------------
| time/ | |
| fps | 92 |
| iterations | 7600 |
| time_elapsed | 410 |
| total_timesteps | 38000 |
| train/ | |
| entropy_loss | -41.7 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 7599 |
| policy_loss | 1.61e+08 |
| std | 0.971 |
| value_loss | 1.98e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 92 |
| iterations | 7700 |
| time_elapsed | 415 |
| total_timesteps | 38500 |
| train/ | |
| entropy_loss | -41.6 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 7699 |
| policy_loss | 2.26e+08 |
| std | 0.97 |
| value_loss | 2.97e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 92 |
| iterations | 7800 |
| time_elapsed | 420 |
| total_timesteps | 39000 |
| train/ | |
| entropy_loss | -41.6 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 7799 |
| policy_loss | 3.56e+08 |
| std | 0.97 |
| value_loss | 7.15e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 92 |
| iterations | 7900 |
| time_elapsed | 425 |
| total_timesteps | 39500 |
| train/ | |
| entropy_loss | -41.6 |
| explained_variance | 2.38e-07 |
| learning_rate | 0.0002 |
| n_updates | 7899 |
| policy_loss | 3.49e+08 |
| std | 0.97 |
| value_loss | 1.01e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 92 |
| iterations | 8000 |
| time_elapsed | 431 |
| total_timesteps | 40000 |
| train/ | |
| entropy_loss | -41.6 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 7999 |
| policy_loss | 5.36e+08 |
| std | 0.969 |
| value_loss | 2.14e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4431857.750332673
Sharpe: 1.0036915590210185
=================================
-------------------------------------
| time/ | |
| fps | 92 |
| iterations | 8100 |
| time_elapsed | 436 |
| total_timesteps | 40500 |
| train/ | |
| entropy_loss | -41.6 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 8099 |
| policy_loss | 1.67e+08 |
| std | 0.969 |
| value_loss | 2.17e+13 |
-------------------------------------
-------------------------------------
| time/ | |
| fps | 92 |
| iterations | 8200 |
| time_elapsed | 442 |
| total_timesteps | 41000 |
| train/ | |
| entropy_loss | -41.6 |
| explained_variance | -2.38e-07 |
| learning_rate | 0.0002 |
| n_updates | 8199 |
| policy_loss | 2.15e+08 |
| std | 0.968 |
| value_loss | 3.25e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 92 |
| iterations | 8300 |
| time_elapsed | 447 |
| total_timesteps | 41500 |
| train/ | |
| entropy_loss | -41.6 |
| explained_variance | 5.96e-08 |
| learning_rate | 0.0002 |
| n_updates | 8299 |
| policy_loss | 3.2e+08 |
| std | 0.968 |
| value_loss | 7.55e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 92 |
| iterations | 8400 |
| time_elapsed | 452 |
| total_timesteps | 42000 |
| train/ | |
| entropy_loss | -41.6 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 8399 |
| policy_loss | 3.93e+08 |
| std | 0.968 |
| value_loss | 1.14e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 92 |
| iterations | 8500 |
| time_elapsed | 457 |
| total_timesteps | 42500 |
| train/ | |
| entropy_loss | -41.6 |
| explained_variance | 1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 8499 |
| policy_loss | 5.57e+08 |
| std | 0.967 |
| value_loss | 2.15e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4677225.075669589
Sharpe: 1.0421669164741074
=================================
-------------------------------------
| time/ | |
| fps | 92 |
| iterations | 8600 |
| time_elapsed | 463 |
| total_timesteps | 43000 |
| train/ | |
| entropy_loss | -41.5 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 8599 |
| policy_loss | 1.64e+08 |
| std | 0.966 |
| value_loss | 1.96e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 92 |
| iterations | 8700 |
| time_elapsed | 468 |
| total_timesteps | 43500 |
| train/ | |
| entropy_loss | -41.5 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 8699 |
| policy_loss | 1.93e+08 |
| std | 0.965 |
| value_loss | 3.01e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 92 |
| iterations | 8800 |
| time_elapsed | 474 |
| total_timesteps | 44000 |
| train/ | |
| entropy_loss | -41.5 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 8799 |
| policy_loss | 3.02e+08 |
| std | 0.964 |
| value_loss | 7.3e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 92 |
| iterations | 8900 |
| time_elapsed | 479 |
| total_timesteps | 44500 |
| train/ | |
| entropy_loss | -41.4 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 8899 |
| policy_loss | 3.86e+08 |
| std | 0.964 |
| value_loss | 1.07e+14 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 92 |
| iterations | 9000 |
| time_elapsed | 484 |
| total_timesteps | 45000 |
| train/ | |
| entropy_loss | -41.4 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 8999 |
| policy_loss | 5.81e+08 |
| std | 0.964 |
| value_loss | 1.96e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4687361.780138577
Sharpe: 1.047264153324412
=================================
------------------------------------
| time/ | |
| fps | 92 |
| iterations | 9100 |
| time_elapsed | 490 |
| total_timesteps | 45500 |
| train/ | |
| entropy_loss | -41.4 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 9099 |
| policy_loss | 1.44e+08 |
| std | 0.963 |
| value_loss | 1.79e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 92 |
| iterations | 9200 |
| time_elapsed | 495 |
| total_timesteps | 46000 |
| train/ | |
| entropy_loss | -41.4 |
| explained_variance | 1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 9199 |
| policy_loss | 1.95e+08 |
| std | 0.963 |
| value_loss | 2.75e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 92 |
| iterations | 9300 |
| time_elapsed | 501 |
| total_timesteps | 46500 |
| train/ | |
| entropy_loss | -41.4 |
| explained_variance | 5.96e-08 |
| learning_rate | 0.0002 |
| n_updates | 9299 |
| policy_loss | 2.94e+08 |
| std | 0.962 |
| value_loss | 6.66e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 92 |
| iterations | 9400 |
| time_elapsed | 506 |
| total_timesteps | 47000 |
| train/ | |
| entropy_loss | -41.4 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 9399 |
| policy_loss | 3.76e+08 |
| std | 0.961 |
| value_loss | 9.96e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 92 |
| iterations | 9500 |
| time_elapsed | 511 |
| total_timesteps | 47500 |
| train/ | |
| entropy_loss | -41.4 |
| explained_variance | 5.96e-08 |
| learning_rate | 0.0002 |
| n_updates | 9499 |
| policy_loss | 5.17e+08 |
| std | 0.961 |
| value_loss | 1.96e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4782206.641076664
Sharpe: 1.0539810305307713
=================================
------------------------------------
| time/ | |
| fps | 92 |
| iterations | 9600 |
| time_elapsed | 517 |
| total_timesteps | 48000 |
| train/ | |
| entropy_loss | -41.4 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 9599 |
| policy_loss | 1.5e+08 |
| std | 0.961 |
| value_loss | 1.64e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 92 |
| iterations | 9700 |
| time_elapsed | 522 |
| total_timesteps | 48500 |
| train/ | |
| entropy_loss | -41.3 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 9699 |
| policy_loss | 1.78e+08 |
| std | 0.96 |
| value_loss | 2.5e+13 |
-------------------------------------
-------------------------------------
| time/ | |
| fps | 92 |
| iterations | 9800 |
| time_elapsed | 527 |
| total_timesteps | 49000 |
| train/ | |
| entropy_loss | -41.3 |
| explained_variance | -2.38e-07 |
| learning_rate | 0.0002 |
| n_updates | 9799 |
| policy_loss | 3.04e+08 |
| std | 0.959 |
| value_loss | 6.54e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 92 |
| iterations | 9900 |
| time_elapsed | 532 |
| total_timesteps | 49500 |
| train/ | |
| entropy_loss | -41.3 |
| explained_variance | 5.96e-08 |
| learning_rate | 0.0002 |
| n_updates | 9899 |
| policy_loss | 3.76e+08 |
| std | 0.959 |
| value_loss | 9.66e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 92 |
| iterations | 10000 |
| time_elapsed | 538 |
| total_timesteps | 50000 |
| train/ | |
| entropy_loss | -41.3 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 9999 |
| policy_loss | 4.74e+08 |
| std | 0.959 |
| value_loss | 1.89e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4762775.543790426
Sharpe: 1.0532360980699083
=================================
-------------------------------------
| time/ | |
| fps | 92 |
| iterations | 10100 |
| time_elapsed | 544 |
| total_timesteps | 50500 |
| train/ | |
| entropy_loss | -41.3 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 10099 |
| policy_loss | 1.48e+08 |
| std | 0.958 |
| value_loss | 1.7e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 92 |
| iterations | 10200 |
| time_elapsed | 549 |
| total_timesteps | 51000 |
| train/ | |
| entropy_loss | -41.3 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 10199 |
| policy_loss | 1.79e+08 |
| std | 0.957 |
| value_loss | 2.75e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 92 |
| iterations | 10300 |
| time_elapsed | 554 |
| total_timesteps | 51500 |
| train/ | |
| entropy_loss | -41.2 |
| explained_variance | 1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 10299 |
| policy_loss | 2.91e+08 |
| std | 0.957 |
| value_loss | 6.26e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 92 |
| iterations | 10400 |
| time_elapsed | 559 |
| total_timesteps | 52000 |
| train/ | |
| entropy_loss | -41.2 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 10399 |
| policy_loss | 3.59e+08 |
| std | 0.956 |
| value_loss | 9.71e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 92 |
| iterations | 10500 |
| time_elapsed | 565 |
| total_timesteps | 52500 |
| train/ | |
| entropy_loss | -41.2 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 10499 |
| policy_loss | 5.47e+08 |
| std | 0.956 |
| value_loss | 1.9e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4807763.879332
Sharpe: 1.0615705911859483
=================================
------------------------------------
| time/ | |
| fps | 92 |
| iterations | 10600 |
| time_elapsed | 571 |
| total_timesteps | 53000 |
| train/ | |
| entropy_loss | -41.2 |
| explained_variance | 1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 10599 |
| policy_loss | 1.52e+08 |
| std | 0.956 |
| value_loss | 1.5e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 92 |
| iterations | 10700 |
| time_elapsed | 577 |
| total_timesteps | 53500 |
| train/ | |
| entropy_loss | -41.2 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 10699 |
| policy_loss | 1.97e+08 |
| std | 0.956 |
| value_loss | 2.6e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 92 |
| iterations | 10800 |
| time_elapsed | 583 |
| total_timesteps | 54000 |
| train/ | |
| entropy_loss | -41.2 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 10799 |
| policy_loss | 2.99e+08 |
| std | 0.955 |
| value_loss | 6.59e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 92 |
| iterations | 10900 |
| time_elapsed | 588 |
| total_timesteps | 54500 |
| train/ | |
| entropy_loss | -41.2 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 10899 |
| policy_loss | 3.67e+08 |
| std | 0.955 |
| value_loss | 1.05e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 93 |
| iterations | 11000 |
| time_elapsed | 589 |
| total_timesteps | 55000 |
| train/ | |
| entropy_loss | -41.2 |
| explained_variance | 1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 10999 |
| policy_loss | 5.2e+08 |
| std | 0.955 |
| value_loss | 1.83e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4716781.109664239
Sharpe: 1.0426665143039704
=================================
-------------------------------------
| time/ | |
| fps | 93 |
| iterations | 11100 |
| time_elapsed | 591 |
| total_timesteps | 55500 |
| train/ | |
| entropy_loss | -41.2 |
| explained_variance | -2.38e-07 |
| learning_rate | 0.0002 |
| n_updates | 11099 |
| policy_loss | 1.5e+08 |
| std | 0.955 |
| value_loss | 1.47e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 94 |
| iterations | 11200 |
| time_elapsed | 593 |
| total_timesteps | 56000 |
| train/ | |
| entropy_loss | -41.2 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 11199 |
| policy_loss | 2.13e+08 |
| std | 0.955 |
| value_loss | 3.06e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 94 |
| iterations | 11300 |
| time_elapsed | 595 |
| total_timesteps | 56500 |
| train/ | |
| entropy_loss | -41.2 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 11299 |
| policy_loss | 2.96e+08 |
| std | 0.955 |
| value_loss | 6.37e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 95 |
| iterations | 11400 |
| time_elapsed | 596 |
| total_timesteps | 57000 |
| train/ | |
| entropy_loss | -41.2 |
| explained_variance | 1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 11399 |
| policy_loss | 3.79e+08 |
| std | 0.954 |
| value_loss | 1.08e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 96 |
| iterations | 11500 |
| time_elapsed | 598 |
| total_timesteps | 57500 |
| train/ | |
| entropy_loss | -41.1 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 11499 |
| policy_loss | 4.8e+08 |
| std | 0.954 |
| value_loss | 1.67e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4637607.498670973
Sharpe: 1.0335287851960007
=================================
-------------------------------------
| time/ | |
| fps | 96 |
| iterations | 11600 |
| time_elapsed | 600 |
| total_timesteps | 58000 |
| train/ | |
| entropy_loss | -41.1 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 11599 |
| policy_loss | 1.41e+08 |
| std | 0.954 |
| value_loss | 1.19e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 97 |
| iterations | 11700 |
| time_elapsed | 602 |
| total_timesteps | 58500 |
| train/ | |
| entropy_loss | -41.1 |
| explained_variance | 1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 11699 |
| policy_loss | 2.07e+08 |
| std | 0.953 |
| value_loss | 2.99e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 97 |
| iterations | 11800 |
| time_elapsed | 603 |
| total_timesteps | 59000 |
| train/ | |
| entropy_loss | -41.1 |
| explained_variance | 1.79e-07 |
| learning_rate | 0.0002 |
| n_updates | 11799 |
| policy_loss | 2.74e+08 |
| std | 0.953 |
| value_loss | 5.67e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 98 |
| iterations | 11900 |
| time_elapsed | 605 |
| total_timesteps | 59500 |
| train/ | |
| entropy_loss | -41.1 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 11899 |
| policy_loss | 3.89e+08 |
| std | 0.951 |
| value_loss | 1.06e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 98 |
| iterations | 12000 |
| time_elapsed | 606 |
| total_timesteps | 60000 |
| train/ | |
| entropy_loss | -41 |
| explained_variance | 5.96e-08 |
| learning_rate | 0.0002 |
| n_updates | 11999 |
| policy_loss | 4.77e+08 |
| std | 0.951 |
| value_loss | 1.63e+14 |
------------------------------------
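###Markdown
Once training finishes, the returned Stable-Baselines3 model object can be saved for later reuse. A minimal sketch (it assumes `config` was imported as in the setup cells, and the target file name is illustrative):
###Code
# Persist the trained A2C agent so it can be reloaded without retraining.
trained_a2c.save("./" + config.TRAINED_MODEL_DIR + "/a2c_portfolio")
###Output
_____no_output_____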
###Markdown
Model 2: **PPO**
###Code
agent = DRLAgent(env = env_train)
PPO_PARAMS = {
    "n_steps": 2048,          # rollout length per policy update
    "ent_coef": 0.005,        # entropy bonus coefficient (encourages exploration)
    "learning_rate": 0.0001,
    "batch_size": 128,        # minibatch size for each gradient step
}
model_ppo = agent.get_model("ppo",model_kwargs = PPO_PARAMS)
trained_ppo = agent.train_model(model=model_ppo,
tb_log_name='ppo',
total_timesteps=80000)
###Output
Logging to tensorboard_log/ppo/ppo_3
-----------------------------
| time/ | |
| fps | 458 |
| iterations | 1 |
| time_elapsed | 4 |
| total_timesteps | 2048 |
-----------------------------
=================================
begin_total_asset:1000000
end_total_asset:4917364.6278486075
Sharpe: 1.074414829116363
=================================
--------------------------------------------
| time/ | |
| fps | 391 |
| iterations | 2 |
| time_elapsed | 10 |
| total_timesteps | 4096 |
| train/ | |
| approx_kl | -7.8231096e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -3.71e+14 |
| learning_rate | 0.0001 |
| loss | 7.78e+14 |
| n_updates | 10 |
| policy_gradient_loss | -6.16e-07 |
| std | 1 |
| value_loss | 1.57e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4996331.100586685
Sharpe: 1.0890927964884638
=================================
--------------------------------------------
| time/ | |
| fps | 373 |
| iterations | 3 |
| time_elapsed | 16 |
| total_timesteps | 6144 |
| train/ | |
| approx_kl | -3.5390258e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -8.76e+14 |
| learning_rate | 0.0001 |
| loss | 1.1e+15 |
| n_updates | 20 |
| policy_gradient_loss | -4.29e-07 |
| std | 1 |
| value_loss | 2.33e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4751039.2878817525
Sharpe: 1.0560179406423764
=================================
--------------------------------------------
| time/ | |
| fps | 365 |
| iterations | 4 |
| time_elapsed | 22 |
| total_timesteps | 8192 |
| train/ | |
| approx_kl | -1.6763806e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -8.01e+15 |
| learning_rate | 0.0001 |
| loss | 1.25e+15 |
| n_updates | 30 |
| policy_gradient_loss | -5.58e-07 |
| std | 1 |
| value_loss | 2.59e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4769059.347696523
Sharpe: 1.056814654380227
=================================
--------------------------------------------
| time/ | |
| fps | 360 |
| iterations | 5 |
| time_elapsed | 28 |
| total_timesteps | 10240 |
| train/ | |
| approx_kl | -5.5879354e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -2.55e+16 |
| learning_rate | 0.0001 |
| loss | 1.24e+15 |
| n_updates | 40 |
| policy_gradient_loss | -4.9e-07 |
| std | 1 |
| value_loss | 2.7e+15 |
--------------------------------------------
--------------------------------------------
| time/ | |
| fps | 358 |
| iterations | 6 |
| time_elapsed | 34 |
| total_timesteps | 12288 |
| train/ | |
| approx_kl | 1.13621354e-07 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -9.17e+16 |
| learning_rate | 0.0001 |
| loss | 1.35e+15 |
| n_updates | 50 |
| policy_gradient_loss | -4.28e-07 |
| std | 1 |
| value_loss | 2.77e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4816491.86007194
Sharpe: 1.0636199939613733
=================================
-------------------------------------------
| time/ | |
| fps | 356 |
| iterations | 7 |
| time_elapsed | 40 |
| total_timesteps | 14336 |
| train/ | |
| approx_kl | 3.5390258e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.42e+17 |
| learning_rate | 0.0001 |
| loss | 1.03e+15 |
| n_updates | 60 |
| policy_gradient_loss | -6.52e-07 |
| std | 1 |
| value_loss | 1.94e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4631919.83090099
Sharpe: 1.0396504731290799
=================================
-------------------------------------------
| time/ | |
| fps | 354 |
| iterations | 8 |
| time_elapsed | 46 |
| total_timesteps | 16384 |
| train/ | |
| approx_kl | 1.7508864e-07 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -6.93e+17 |
| learning_rate | 0.0001 |
| loss | 9.83e+14 |
| n_updates | 70 |
| policy_gradient_loss | -5.78e-07 |
| std | 1 |
| value_loss | 2.06e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4728763.286321457
Sharpe: 1.052390302374202
=================================
-------------------------------------------
| time/ | |
| fps | 353 |
| iterations | 9 |
| time_elapsed | 52 |
| total_timesteps | 18432 |
| train/ | |
| approx_kl | 4.4703484e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.72e+18 |
| learning_rate | 0.0001 |
| loss | 1.25e+15 |
| n_updates | 80 |
| policy_gradient_loss | -4.84e-07 |
| std | 1 |
| value_loss | 2.33e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4439983.024798136
Sharpe: 1.013829383303325
=================================
--------------------------------------------
| time/ | |
| fps | 352 |
| iterations | 10 |
| time_elapsed | 58 |
| total_timesteps | 20480 |
| train/ | |
| approx_kl | -1.3038516e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.7e+18 |
| learning_rate | 0.0001 |
| loss | 1.17e+15 |
| n_updates | 90 |
| policy_gradient_loss | -4.82e-07 |
| std | 1 |
| value_loss | 2.58e+15 |
--------------------------------------------
-------------------------------------------
| time/ | |
| fps | 352 |
| iterations | 11 |
| time_elapsed | 63 |
| total_timesteps | 22528 |
| train/ | |
| approx_kl | -9.313226e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.85e+18 |
| learning_rate | 0.0001 |
| loss | 1.2e+15 |
| n_updates | 100 |
| policy_gradient_loss | -5.2e-07 |
| std | 1 |
| value_loss | 2.51e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:5048884.524536961
Sharpe: 1.0963911876706685
=================================
-------------------------------------------
| time/ | |
| fps | 351 |
| iterations | 12 |
| time_elapsed | 69 |
| total_timesteps | 24576 |
| train/ | |
| approx_kl | 3.7252903e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -2.67e+18 |
| learning_rate | 0.0001 |
| loss | 1.44e+15 |
| n_updates | 110 |
| policy_gradient_loss | -4.53e-07 |
| std | 1 |
| value_loss | 2.8e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4824229.456193555
Sharpe: 1.0648549464252506
=================================
-------------------------------------------
| time/ | |
| fps | 351 |
| iterations | 13 |
| time_elapsed | 75 |
| total_timesteps | 26624 |
| train/ | |
| approx_kl | 3.3527613e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -3.38e+18 |
| learning_rate | 0.0001 |
| loss | 7.89e+14 |
| n_updates | 120 |
| policy_gradient_loss | -6.06e-07 |
| std | 1 |
| value_loss | 1.76e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4602974.615591427
Sharpe: 1.034753433280377
=================================
-------------------------------------------
| time/ | |
| fps | 350 |
| iterations | 14 |
| time_elapsed | 81 |
| total_timesteps | 28672 |
| train/ | |
| approx_kl | 8.8475645e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.75e+19 |
| learning_rate | 0.0001 |
| loss | 1.23e+15 |
| n_updates | 130 |
| policy_gradient_loss | -5.8e-07 |
| std | 1 |
| value_loss | 2.27e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4608422.583401322
Sharpe: 1.035300880612428
=================================
-------------------------------------------
| time/ | |
| fps | 349 |
| iterations | 15 |
| time_elapsed | 87 |
| total_timesteps | 30720 |
| train/ | |
| approx_kl | 1.3038516e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -7.71e+18 |
| learning_rate | 0.0001 |
| loss | 1.22e+15 |
| n_updates | 140 |
| policy_gradient_loss | -5.63e-07 |
| std | 1 |
| value_loss | 2.39e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4826869.636472441
Sharpe: 1.0676330284861433
=================================
--------------------------------------------
| time/ | |
| fps | 348 |
| iterations | 16 |
| time_elapsed | 94 |
| total_timesteps | 32768 |
| train/ | |
| approx_kl | -1.4901161e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.51e+19 |
| learning_rate | 0.0001 |
| loss | 1.22e+15 |
| n_updates | 150 |
| policy_gradient_loss | -5.78e-07 |
| std | 1 |
| value_loss | 2.7e+15 |
--------------------------------------------
-------------------------------------------
| time/ | |
| fps | 346 |
| iterations | 17 |
| time_elapsed | 100 |
| total_timesteps | 34816 |
| train/ | |
| approx_kl | -5.401671e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.48e+19 |
| learning_rate | 0.0001 |
| loss | 1.48e+15 |
| n_updates | 160 |
| policy_gradient_loss | -3.96e-07 |
| std | 1 |
| value_loss | 2.81e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4364006.929301854
Sharpe: 1.002176631256902
=================================
--------------------------------------------
| time/ | |
| fps | 345 |
| iterations | 18 |
| time_elapsed | 106 |
| total_timesteps | 36864 |
| train/ | |
| approx_kl | -1.0803342e-07 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.15e+19 |
| learning_rate | 0.0001 |
| loss | 8.41e+14 |
| n_updates | 170 |
| policy_gradient_loss | -4.91e-07 |
| std | 1 |
| value_loss | 1.58e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4796634.5596691
Sharpe: 1.0678319491053092
=================================
--------------------------------------------
| time/ | |
| fps | 344 |
| iterations | 19 |
| time_elapsed | 112 |
| total_timesteps | 38912 |
| train/ | |
| approx_kl | -1.3038516e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -4.21e+19 |
| learning_rate | 0.0001 |
| loss | 1.03e+15 |
| n_updates | 180 |
| policy_gradient_loss | -5.6e-07 |
| std | 1 |
| value_loss | 2.02e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4969786.413399254
Sharpe: 1.0823021486710163
=================================
--------------------------------------------
| time/ | |
| fps | 344 |
| iterations | 20 |
| time_elapsed | 118 |
| total_timesteps | 40960 |
| train/ | |
| approx_kl | -6.7055225e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -6.41e+19 |
| learning_rate | 0.0001 |
| loss | 1.22e+15 |
| n_updates | 190 |
| policy_gradient_loss | -2.87e-07 |
| std | 1 |
| value_loss | 2.4e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4885480.801922398
Sharpe: 1.0729451877791811
=================================
--------------------------------------------
| time/ | |
| fps | 343 |
| iterations | 21 |
| time_elapsed | 125 |
| total_timesteps | 43008 |
| train/ | |
| approx_kl | -5.5879354e-09 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -6.85e+19 |
| learning_rate | 0.0001 |
| loss | 1.62e+15 |
| n_updates | 200 |
| policy_gradient_loss | -5.24e-07 |
| std | 1 |
| value_loss | 2.95e+15 |
--------------------------------------------
-------------------------------------------
| time/ | |
| fps | 343 |
| iterations | 22 |
| time_elapsed | 131 |
| total_timesteps | 45056 |
| train/ | |
| approx_kl | 1.8067658e-07 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -7.01e+19 |
| learning_rate | 0.0001 |
| loss | 1.34e+15 |
| n_updates | 210 |
| policy_gradient_loss | -4.62e-07 |
| std | 1 |
| value_loss | 2.93e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:5613709.009268909
Sharpe: 1.1673870008513114
=================================
--------------------------------------------
| time/ | |
| fps | 342 |
| iterations | 23 |
| time_elapsed | 137 |
| total_timesteps | 47104 |
| train/ | |
| approx_kl | -2.0489097e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -6.72e+19 |
| learning_rate | 0.0001 |
| loss | 1.41e+15 |
| n_updates | 220 |
| policy_gradient_loss | -4.78e-07 |
| std | 1 |
| value_loss | 2.71e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:5043800.590470289
Sharpe: 1.0953673306850924
=================================
-------------------------------------------
| time/ | |
| fps | 342 |
| iterations | 24 |
| time_elapsed | 143 |
| total_timesteps | 49152 |
| train/ | |
| approx_kl | 2.4214387e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.37e+20 |
| learning_rate | 0.0001 |
| loss | 1.01e+15 |
| n_updates | 230 |
| policy_gradient_loss | -5.28e-07 |
| std | 1 |
| value_loss | 2.26e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4776576.852863929
Sharpe: 1.0593811754233755
=================================
-------------------------------------------
| time/ | |
| fps | 342 |
| iterations | 25 |
| time_elapsed | 149 |
| total_timesteps | 51200 |
| train/ | |
| approx_kl | 4.4703484e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -3.27e+20 |
| learning_rate | 0.0001 |
| loss | 1.21e+15 |
| n_updates | 240 |
| policy_gradient_loss | -4.82e-07 |
| std | 1 |
| value_loss | 2.46e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4468393.200157898
Sharpe: 1.0192746589767419
=================================
-------------------------------------------
| time/ | |
| fps | 341 |
| iterations | 26 |
| time_elapsed | 156 |
| total_timesteps | 53248 |
| train/ | |
| approx_kl | 2.6077032e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.96e+20 |
| learning_rate | 0.0001 |
| loss | 1.31e+15 |
| n_updates | 250 |
| policy_gradient_loss | -5.36e-07 |
| std | 1 |
| value_loss | 2.59e+15 |
-------------------------------------------
--------------------------------------------
| time/ | |
| fps | 341 |
| iterations | 27 |
| time_elapsed | 162 |
| total_timesteps | 55296 |
| train/ | |
| approx_kl | -1.3038516e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.68e+20 |
| learning_rate | 0.0001 |
| loss | 1.33e+15 |
| n_updates | 260 |
| policy_gradient_loss | -3.77e-07 |
| std | 1 |
| value_loss | 2.51e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4875234.39450474
Sharpe: 1.0721137742534572
=================================
--------------------------------------------
| time/ | |
| fps | 340 |
| iterations | 28 |
| time_elapsed | 168 |
| total_timesteps | 57344 |
| train/ | |
| approx_kl | -1.2479722e-07 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.66e+20 |
| learning_rate | 0.0001 |
| loss | 1.59e+15 |
| n_updates | 270 |
| policy_gradient_loss | -4.61e-07 |
| std | 1 |
| value_loss | 2.8e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4600459.210918712
Sharpe: 1.034756153745345
=================================
-------------------------------------------
| time/ | |
| fps | 340 |
| iterations | 29 |
| time_elapsed | 174 |
| total_timesteps | 59392 |
| train/ | |
| approx_kl | -4.284084e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.26e+20 |
| learning_rate | 0.0001 |
| loss | 8.07e+14 |
| n_updates | 280 |
| policy_gradient_loss | -5.44e-07 |
| std | 1 |
| value_loss | 1.62e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4526188.381438201
Sharpe: 1.0293846869900876
=================================
--------------------------------------------
| time/ | |
| fps | 339 |
| iterations | 30 |
| time_elapsed | 180 |
| total_timesteps | 61440 |
| train/ | |
| approx_kl | -2.4214387e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -6.44e+20 |
| learning_rate | 0.0001 |
| loss | 1.12e+15 |
| n_updates | 290 |
| policy_gradient_loss | -5.65e-07 |
| std | 1 |
| value_loss | 2.1e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4487836.803716703
Sharpe: 1.010974660894394
=================================
--------------------------------------------
| time/ | |
| fps | 339 |
| iterations | 31 |
| time_elapsed | 187 |
| total_timesteps | 63488 |
| train/ | |
| approx_kl | -2.6077032e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -4.47e+20 |
| learning_rate | 0.0001 |
| loss | 1.14e+15 |
| n_updates | 300 |
| policy_gradient_loss | -4.8e-07 |
| std | 1 |
| value_loss | 2.25e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4480729.650671386
Sharpe: 1.0219085518652522
=================================
--------------------------------------------
| time/ | |
| fps | 339 |
| iterations | 32 |
| time_elapsed | 193 |
| total_timesteps | 65536 |
| train/ | |
| approx_kl | -2.0302832e-07 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -3.87e+20 |
| learning_rate | 0.0001 |
| loss | 1.28e+15 |
| n_updates | 310 |
| policy_gradient_loss | -4.4e-07 |
| std | 1 |
| value_loss | 2.51e+15 |
--------------------------------------------
------------------------------------------
| time/ | |
| fps | 339 |
| iterations | 33 |
| time_elapsed | 199 |
| total_timesteps | 67584 |
| train/ | |
| approx_kl | 1.359731e-07 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -3.68e+20 |
| learning_rate | 0.0001 |
| loss | 1.24e+15 |
| n_updates | 320 |
| policy_gradient_loss | -4.51e-07 |
| std | 1 |
| value_loss | 2.66e+15 |
------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4399373.734699048
Sharpe: 1.005407087483561
=================================
-------------------------------------------
| time/ | |
| fps | 338 |
| iterations | 34 |
| time_elapsed | 205 |
| total_timesteps | 69632 |
| train/ | |
| approx_kl | 2.2351742e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -2.29e+20 |
| learning_rate | 0.0001 |
| loss | 8.5e+14 |
| n_updates | 330 |
| policy_gradient_loss | -5.56e-07 |
| std | 1 |
| value_loss | 1.64e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4305742.921261859
Sharpe: 0.9945061913961891
=================================
-------------------------------------------
| time/ | |
| fps | 338 |
| iterations | 35 |
| time_elapsed | 211 |
| total_timesteps | 71680 |
| train/ | |
| approx_kl | 1.3411045e-07 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -7.11e+20 |
| learning_rate | 0.0001 |
| loss | 7.97e+14 |
| n_updates | 340 |
| policy_gradient_loss | -6.48e-07 |
| std | 1 |
| value_loss | 1.8e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4794175.629957249
Sharpe: 1.0611635246548963
=================================
--------------------------------------------
| time/ | |
| fps | 338 |
| iterations | 36 |
| time_elapsed | 217 |
| total_timesteps | 73728 |
| train/ | |
| approx_kl | -3.3527613e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.16e+21 |
| learning_rate | 0.0001 |
| loss | 1.07e+15 |
| n_updates | 350 |
| policy_gradient_loss | -4.82e-07 |
| std | 1 |
| value_loss | 2.06e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4467487.416264421
Sharpe: 1.021012208464475
=================================
------------------------------------------
| time/ | |
| fps | 338 |
| iterations | 37 |
| time_elapsed | 224 |
| total_timesteps | 75776 |
| train/ | |
| approx_kl | 5.401671e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -9.89e+20 |
| learning_rate | 0.0001 |
| loss | 1.46e+15 |
| n_updates | 360 |
| policy_gradient_loss | -4.78e-07 |
| std | 1 |
| value_loss | 2.75e+15 |
------------------------------------------
-------------------------------------------
| time/ | |
| fps | 338 |
| iterations | 38 |
| time_elapsed | 229 |
| total_timesteps | 77824 |
| train/ | |
| approx_kl | 1.6763806e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -7.64e+20 |
| learning_rate | 0.0001 |
| loss | 1.25e+15 |
| n_updates | 370 |
| policy_gradient_loss | -4.54e-07 |
| std | 1 |
| value_loss | 2.57e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4806649.219027834
Sharpe: 1.0604486398186765
=================================
------------------------------------------
| time/ | |
| fps | 338 |
| iterations | 39 |
| time_elapsed | 236 |
| total_timesteps | 79872 |
| train/ | |
| approx_kl | 4.284084e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -6.96e+20 |
| learning_rate | 0.0001 |
| loss | 1.28e+15 |
| n_updates | 380 |
| policy_gradient_loss | -5.9e-07 |
| std | 1 |
| value_loss | 2.44e+15 |
------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4653147.508966551
Sharpe: 1.043189911078732
=================================
-------------------------------------------
| time/ | |
| fps | 338 |
| iterations | 40 |
| time_elapsed | 242 |
| total_timesteps | 81920 |
| train/ | |
| approx_kl | 6.3329935e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.04e+21 |
| learning_rate | 0.0001 |
| loss | 1.01e+15 |
| n_updates | 390 |
| policy_gradient_loss | -5.33e-07 |
| std | 1 |
| value_loss | 1.82e+15 |
-------------------------------------------
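###Markdown
The run above logs to `tensorboard_log/ppo` (see the first line of the training output). The learning curves can be inspected with TensorBoard; a sketch of the usual notebook invocation:
###Code
# Inspect the PPO training curves (run these magics in a notebook environment).
# %load_ext tensorboard
# %tensorboard --logdir tensorboard_log/ppo
###Output
_____no_output_____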
###Markdown
Model 3: **DDPG**
###Code
agent = DRLAgent(env = env_train)
DDPG_PARAMS = {
    "batch_size": 128,       # minibatch size per gradient step
    "buffer_size": 50000,    # replay buffer capacity
    "learning_rate": 0.001,
}
model_ddpg = agent.get_model("ddpg",model_kwargs = DDPG_PARAMS)
trained_ddpg = agent.train_model(model=model_ddpg,
tb_log_name='ddpg',
total_timesteps=50000)
###Output
Logging to tensorboard_log/ddpg/ddpg_2
=================================
begin_total_asset:1000000
end_total_asset:4625995.900359718
Sharpe: 1.040202670783119
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
----------------------------------
| time/ | |
| episodes | 4 |
| fps | 22 |
| time_elapsed | 439 |
| total timesteps | 10064 |
| train/ | |
| actor_loss | -6.99e+07 |
| critic_loss | 7.27e+12 |
| learning_rate | 0.001 |
| n_updates | 7548 |
----------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
----------------------------------
| time/ | |
| episodes | 8 |
| fps | 20 |
| time_elapsed | 980 |
| total timesteps | 20128 |
| train/ | |
| actor_loss | -1.44e+08 |
| critic_loss | 1.81e+13 |
| learning_rate | 0.001 |
| n_updates | 17612 |
----------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
----------------------------------
| time/ | |
| episodes | 12 |
| fps | 19 |
| time_elapsed | 1542 |
| total timesteps | 30192 |
| train/ | |
| actor_loss | -1.88e+08 |
| critic_loss | 2.72e+13 |
| learning_rate | 0.001 |
| n_updates | 27676 |
----------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
----------------------------------
| time/ | |
| episodes | 16 |
| fps | 18 |
| time_elapsed | 2133 |
| total timesteps | 40256 |
| train/ | |
| actor_loss | -2.15e+08 |
| critic_loss | 3.45e+13 |
| learning_rate | 0.001 |
| n_updates | 37740 |
----------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
---------------------------------
| time/ | |
| episodes | 20 |
| fps | 17 |
| time_elapsed | 2874 |
| total timesteps | 50320 |
| train/ | |
| actor_loss | -2.3e+08 |
| critic_loss | 4.05e+13 |
| learning_rate | 0.001 |
| n_updates | 47804 |
---------------------------------
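###Markdown
Note that from the second episode onward the DDPG run above reports an identical end_total_asset and Sharpe ratio in every episode, which suggests the deterministic policy's actions stopped changing on this training data.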
###Markdown
Model 4: **SAC**
###Code
agent = DRLAgent(env = env_train)
SAC_PARAMS = {
    "batch_size": 128,           # minibatch size per gradient step
    "buffer_size": 100000,       # replay buffer capacity
    "learning_rate": 0.0003,
    "learning_starts": 100,      # steps collected before learning begins
    "ent_coef": "auto_0.1",      # automatic entropy tuning, initialized at 0.1
}
model_sac = agent.get_model("sac",model_kwargs = SAC_PARAMS)
trained_sac = agent.train_model(model=model_sac,
tb_log_name='sac',
total_timesteps=50000)
###Output
Logging to tensorboard_log/sac/sac_1
=================================
begin_total_asset:1000000
end_total_asset:4449463.498168942
Sharpe: 1.01245667390232
=================================
=================================
begin_total_asset:1000000
end_total_asset:4418643.239765096
Sharpe: 1.0135796594260282
=================================
=================================
begin_total_asset:1000000
end_total_asset:4418644.1960784905
Sharpe: 1.0135797537524718
=================================
=================================
begin_total_asset:1000000
end_total_asset:4418659.429680678
Sharpe: 1.013581852537709
=================================
----------------------------------
| time/ | |
| episodes | 4 |
| fps | 12 |
| time_elapsed | 783 |
| total timesteps | 10064 |
| train/ | |
| actor_loss | -8.83e+07 |
| critic_loss | 6.57e+12 |
| ent_coef | 2.24 |
| ent_coef_loss | -205 |
| learning_rate | 0.0003 |
| n_updates | 9963 |
----------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4418651.576406099
Sharpe: 1.013581224026754
=================================
=================================
begin_total_asset:1000000
end_total_asset:4418670.948269031
Sharpe: 1.0135838030234754
=================================
=================================
begin_total_asset:1000000
end_total_asset:4418682.278829884
Sharpe: 1.013585596968056
=================================
=================================
begin_total_asset:1000000
end_total_asset:4418791.911955293
Sharpe: 1.0136007328171013
=================================
----------------------------------
| time/ | |
| episodes | 8 |
| fps | 12 |
| time_elapsed | 1585 |
| total timesteps | 20128 |
| train/ | |
| actor_loss | -1.51e+08 |
| critic_loss | 1.12e+13 |
| ent_coef | 41.7 |
| ent_coef_loss | -670 |
| learning_rate | 0.0003 |
| n_updates | 20027 |
----------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4418737.365107464
Sharpe: 1.0135970410224868
=================================
=================================
begin_total_asset:1000000
end_total_asset:4418754.895735274
Sharpe: 1.0135965589029627
=================================
=================================
begin_total_asset:1000000
end_total_asset:4419325.814567342
Sharpe: 1.0136807224228588
=================================
=================================
begin_total_asset:1000000
end_total_asset:4418142.473513333
Sharpe: 1.0135234795926031
=================================
----------------------------------
| time/ | |
| episodes | 12 |
| fps | 12 |
| time_elapsed | 2400 |
| total timesteps | 30192 |
| train/ | |
| actor_loss | -1.85e+08 |
| critic_loss | 1.87e+13 |
| ent_coef | 725 |
| ent_coef_loss | -673 |
| learning_rate | 0.0003 |
| n_updates | 30091 |
----------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4422046.188863339
Sharpe: 1.0140936726052256
=================================
=================================
begin_total_asset:1000000
end_total_asset:4424919.463828854
Sharpe: 1.014521127041106
=================================
=================================
begin_total_asset:1000000
end_total_asset:4427483.152494239
Sharpe: 1.0148626804754584
=================================
=================================
begin_total_asset:1000000
end_total_asset:4460697.650185859
Sharpe: 1.019852362102548
=================================
----------------------------------
| time/ | |
| episodes | 16 |
| fps | 12 |
| time_elapsed | 3210 |
| total timesteps | 40256 |
| train/ | |
| actor_loss | -1.93e+08 |
| critic_loss | 1.62e+13 |
| ent_coef | 1.01e+04 |
| ent_coef_loss | -238 |
| learning_rate | 0.0003 |
| n_updates | 40155 |
----------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4434035.982803257
Sharpe: 1.0161512551319891
=================================
=================================
begin_total_asset:1000000
end_total_asset:4454728.906041551
Sharpe: 1.018484863448905
=================================
=================================
begin_total_asset:1000000
end_total_asset:4475667.120269234
Sharpe: 1.0215545521682856
=================================
###Markdown
Trading
Assume that we have $1,000,000 of initial capital on 2019-01-01. We use the trained A2C model to trade the Dow Jones 30 stocks (the prediction cell below passes `trained_a2c`).
###Code
trade = data_split(df,'2019-01-01', '2021-01-01')
e_trade_gym = StockPortfolioEnv(df = trade, **env_kwargs)
trade.shape
df_daily_return, df_actions = DRLAgent.DRL_prediction(model=trained_a2c,
                                                      environment=e_trade_gym)
df_daily_return.head()
df_actions.head()
df_actions.to_csv('df_actions.csv')
###Output
_____no_output_____
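###Markdown
Before running the full backtest in Part 7, it can help to sanity-check the raw prediction output. A minimal sketch (it assumes `df_daily_return` has a `daily_return` column, which may vary across FinRL versions):
###Code
# Compound the daily returns to recover the cumulative growth of $1.
cumulative = (1 + df_daily_return["daily_return"]).cumprod()
print("Final growth multiple:", cumulative.iloc[-1])
###Output
_____no_output_____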
###Markdown
Part 7: Backtest Our Strategy
Backtesting plays a key role in evaluating the performance of a trading strategy. An automated backtesting tool is preferred because it reduces human error. We usually use the Quantopian pyfolio package to backtest our trading strategies. It is easy to use and consists of various individual plots that together provide a comprehensive image of the performance of a trading strategy.
7.1 BackTestStats
Pass in `df_daily_return` (returned by `DRL_prediction` above); this information is stored in the env class.
###Code
from pyfolio import timeseries
DRL_strat = convert_daily_return_to_pyfolio_ts(df_daily_return)
perf_func = timeseries.perf_stats
perf_stats_all = perf_func(returns=DRL_strat,
                           factor_returns=DRL_strat,
                           positions=None,
                           transactions=None,
                           turnover_denom="AGB")
print("==============DRL Strategy Stats===========")
perf_stats_all
###Output
==============DRL Strategy Stats===========
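###Markdown
`perf_stats_all` is a pandas Series of summary statistics; it can be persisted alongside `df_actions.csv` for later comparison across models (a sketch; the file name is illustrative):
###Code
# Save the pyfolio summary statistics to disk.
perf_stats_all.to_csv("perf_stats_all.csv")
###Output
_____no_output_____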
###Markdown
7.2 BackTestPlot
###Code
import pyfolio
%matplotlib inline
baseline_df = get_baseline(
ticker='^DJI', start='2019-01-01', end='2021-01-01'
)
baseline_returns = get_daily_return(baseline_df, value_col_name="close")
with pyfolio.plotting.plotting_context(font_scale=1.1):
pyfolio.create_full_tear_sheet(returns = DRL_strat,
benchmark_rets=baseline_returns, set_context=False)
###Output
[*********************100%***********************] 1 of 1 completed
Shape of DataFrame: (506, 8)
###Markdown
Deep Reinforcement Learning for Stock Trading from Scratch: Portfolio Allocation
Tutorials to use OpenAI DRL to perform portfolio allocation in one Jupyter Notebook | Presented at NeurIPS 2020: Deep RL Workshop
* This blog is based on our paper: FinRL: A Deep Reinforcement Learning Library for Automated Stock Trading in Quantitative Finance, presented at NeurIPS 2020: Deep RL Workshop.
* Check out the Medium blog for detailed explanations.
* Please report any issues to our Github: https://github.com/AI4Finance-LLC/FinRL-Library/issues
* **Pytorch Version**
Content
* [1. Problem Definition](0)
* [2. Getting Started - Load Python packages](1)
  * [2.1. Install Packages](1.1)
  * [2.2. Check Additional Packages](1.2)
  * [2.3. Import Packages](1.3)
  * [2.4. Create Folders](1.4)
* [3. Download Data](2)
* [4. Preprocess Data](3)
  * [4.1. Technical Indicators](3.1)
  * [4.2. Perform Feature Engineering](3.2)
* [5. Build Environment](4)
  * [5.1. Training & Trade Data Split](4.1)
  * [5.2. User-defined Environment](4.2)
  * [5.3. Initialize Environment](4.3)
* [6. Implement DRL Algorithms](5)
* [7. Backtesting Performance](6)
  * [7.1. BackTestStats](6.1)
  * [7.2. BackTestPlot](6.2)
  * [7.3. Baseline Stats](6.3)
  * [7.4. Compare to Stock Market Index](6.4)
Part 1. Problem Definition
This problem is to design an automated trading solution for portfolio allocation. We model the trading process as a Markov Decision Process (MDP). We then formulate our trading goal as a maximization problem.
The algorithm is trained using Deep Reinforcement Learning (DRL) algorithms, and the components of the reinforcement learning environment are:
* Action: The action space describes the allowed actions through which the agent interacts with the environment. Normally, a ∈ A includes three actions: a ∈ {−1, 0, 1}, where −1, 0, 1 represent selling, holding, and buying one stock. Also, an action can be carried out on multiple shares. We use an action space {−k, ..., −1, 0, 1, ..., k}, where k denotes the number of shares. For example, "Buy 10 shares of AAPL" or "Sell 10 shares of AAPL" are 10 or −10, respectively.
* Reward function: r(s, a, s′) is the incentive mechanism for an agent to learn a better action. It is the change of the portfolio value when action a is taken at state s and arriving at new state s′, i.e., r(s, a, s′) = v′ − v, where v′ and v represent the portfolio values at states s′ and s, respectively.
* State: The state space describes the observations that the agent receives from the environment. Just as a human trader needs to analyze various information before executing a trade, so our trading agent observes many different features to better learn in an interactive environment.
* Environment: Dow 30 constituents
The data that we will be using for this case study is obtained from the Yahoo Finance API. The data contains Open-High-Low-Close prices and volume.
Part 2. Getting Started - Load Python Packages
2.1. Install all the packages through FinRL library
###Code
## install finrl library
!pip install git+https://github.com/AI4Finance-LLC/FinRL-Library.git
###Output
Collecting git+https://github.com/AI4Finance-LLC/FinRL-Library.git
Cloning https://github.com/AI4Finance-LLC/FinRL-Library.git to /tmp/pip-req-build-4ebj8idf
Running command git clone -q https://github.com/AI4Finance-LLC/FinRL-Library.git /tmp/pip-req-build-4ebj8idf
Requirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from finrl==0.3.0) (1.19.5)
Requirement already satisfied: pandas>=1.1.5 in /usr/local/lib/python3.7/dist-packages (from finrl==0.3.0) (1.1.5)
Collecting stockstats
Downloading https://files.pythonhosted.org/packages/32/41/d3828c5bc0a262cb3112a4024108a3b019c183fa3b3078bff34bf25abf91/stockstats-0.3.2-py2.py3-none-any.whl
Collecting yfinance
Downloading https://files.pythonhosted.org/packages/5e/4e/88d31f5509edcbc51bcbb7eeae72516b17ada1bc2ad5b496e2d05d62c696/yfinance-0.1.60.tar.gz
Requirement already satisfied: matplotlib in /usr/local/lib/python3.7/dist-packages (from finrl==0.3.0) (3.2.2)
Requirement already satisfied: scikit-learn>=0.21.0 in /usr/local/lib/python3.7/dist-packages (from finrl==0.3.0) (0.22.2.post1)
Requirement already satisfied: gym>=0.17 in /usr/local/lib/python3.7/dist-packages (from finrl==0.3.0) (0.17.3)
Collecting stable-baselines3[extra]
  Downloading https://files.pythonhosted.org/packages/c2/ce/5002282b316703191b9e7f7f1be03670f6a0d5e88181366e73d98d630f59/stable_baselines3-1.1.0-py3-none-any.whl (172kB)
     |████████████████████████████████| 174kB 8.6MB/s
Requirement already satisfied: pytest in /usr/local/lib/python3.7/dist-packages (from finrl==0.3.0) (3.6.4)
Requirement already satisfied: setuptools>=41.4.0 in /usr/local/lib/python3.7/dist-packages (from finrl==0.3.0) (57.0.0)
Requirement already satisfied: wheel>=0.33.6 in /usr/local/lib/python3.7/dist-packages (from finrl==0.3.0) (0.36.2)
Collecting pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2
Cloning https://github.com/quantopian/pyfolio.git to /tmp/pip-install-str4s854/pyfolio
Running command git clone -q https://github.com/quantopian/pyfolio.git /tmp/pip-install-str4s854/pyfolio
Requirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.7/dist-packages (from pandas>=1.1.5->finrl==0.3.0) (2018.9)
Requirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.7/dist-packages (from pandas>=1.1.5->finrl==0.3.0) (2.8.1)
Collecting int-date>=0.1.7
Downloading https://files.pythonhosted.org/packages/43/27/31803df15173ab341fe7548c14154b54227dfd8f630daa09a1c6e7db52f7/int_date-0.1.8-py2.py3-none-any.whl
Requirement already satisfied: requests>=2.20 in /usr/local/lib/python3.7/dist-packages (from yfinance->finrl==0.3.0) (2.23.0)
Requirement already satisfied: multitasking>=0.0.7 in /usr/local/lib/python3.7/dist-packages (from yfinance->finrl==0.3.0) (0.0.9)
Collecting lxml>=4.5.1
  Downloading https://files.pythonhosted.org/packages/30/c0/d0526314971fc661b083ab135747dc68446a3022686da8c16d25fcf6ef07/lxml-4.6.3-cp37-cp37m-manylinux2014_x86_64.whl (6.3MB)
     |████████████████████████████████| 6.3MB 29.9MB/s
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->finrl==0.3.0) (1.3.1)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.7/dist-packages (from matplotlib->finrl==0.3.0) (0.10.0)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->finrl==0.3.0) (2.4.7)
Requirement already satisfied: scipy>=0.17.0 in /usr/local/lib/python3.7/dist-packages (from scikit-learn>=0.21.0->finrl==0.3.0) (1.4.1)
Requirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.7/dist-packages (from scikit-learn>=0.21.0->finrl==0.3.0) (1.0.1)
Requirement already satisfied: cloudpickle<1.7.0,>=1.2.0 in /usr/local/lib/python3.7/dist-packages (from gym>=0.17->finrl==0.3.0) (1.3.0)
Requirement already satisfied: pyglet<=1.5.0,>=1.4.0 in /usr/local/lib/python3.7/dist-packages (from gym>=0.17->finrl==0.3.0) (1.5.0)
Requirement already satisfied: torch>=1.4.0 in /usr/local/lib/python3.7/dist-packages (from stable-baselines3[extra]->finrl==0.3.0) (1.9.0+cu102)
Requirement already satisfied: psutil; extra == "extra" in /usr/local/lib/python3.7/dist-packages (from stable-baselines3[extra]->finrl==0.3.0) (5.4.8)
Requirement already satisfied: pillow; extra == "extra" in /usr/local/lib/python3.7/dist-packages (from stable-baselines3[extra]->finrl==0.3.0) (7.1.2)
Requirement already satisfied: tensorboard>=2.2.0; extra == "extra" in /usr/local/lib/python3.7/dist-packages (from stable-baselines3[extra]->finrl==0.3.0) (2.5.0)
Requirement already satisfied: opencv-python; extra == "extra" in /usr/local/lib/python3.7/dist-packages (from stable-baselines3[extra]->finrl==0.3.0) (4.1.2.30)
Requirement already satisfied: atari-py~=0.2.0; extra == "extra" in /usr/local/lib/python3.7/dist-packages (from stable-baselines3[extra]->finrl==0.3.0) (0.2.9)
Requirement already satisfied: six>=1.10.0 in /usr/local/lib/python3.7/dist-packages (from pytest->finrl==0.3.0) (1.15.0)
Requirement already satisfied: py>=1.5.0 in /usr/local/lib/python3.7/dist-packages (from pytest->finrl==0.3.0) (1.10.0)
Requirement already satisfied: pluggy<0.8,>=0.5 in /usr/local/lib/python3.7/dist-packages (from pytest->finrl==0.3.0) (0.7.1)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.7/dist-packages (from pytest->finrl==0.3.0) (21.2.0)
Requirement already satisfied: more-itertools>=4.0.0 in /usr/local/lib/python3.7/dist-packages (from pytest->finrl==0.3.0) (8.8.0)
Requirement already satisfied: atomicwrites>=1.0 in /usr/local/lib/python3.7/dist-packages (from pytest->finrl==0.3.0) (1.4.0)
Requirement already satisfied: ipython>=3.2.3 in /usr/local/lib/python3.7/dist-packages (from pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.0) (5.5.0)
Requirement already satisfied: seaborn>=0.7.1 in /usr/local/lib/python3.7/dist-packages (from pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.0) (0.11.1)
Collecting empyrical>=0.5.0
  Downloading https://files.pythonhosted.org/packages/74/43/1b997c21411c6ab7c96dc034e160198272c7a785aeea7654c9bcf98bec83/empyrical-0.5.5.tar.gz (52kB)
     |████████████████████████████████| 61kB 7.7MB/s
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests>=2.20->yfinance->finrl==0.3.0) (2021.5.30)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests>=2.20->yfinance->finrl==0.3.0) (3.0.4)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests>=2.20->yfinance->finrl==0.3.0) (1.24.3)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests>=2.20->yfinance->finrl==0.3.0) (2.10)
Requirement already satisfied: future in /usr/local/lib/python3.7/dist-packages (from pyglet<=1.5.0,>=1.4.0->gym>=0.17->finrl==0.3.0) (0.16.0)
Requirement already satisfied: typing-extensions in /usr/local/lib/python3.7/dist-packages (from torch>=1.4.0->stable-baselines3[extra]->finrl==0.3.0) (3.7.4.3)
Requirement already satisfied: tensorboard-data-server<0.7.0,>=0.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard>=2.2.0; extra == "extra"->stable-baselines3[extra]->finrl==0.3.0) (0.6.1)
Requirement already satisfied: grpcio>=1.24.3 in /usr/local/lib/python3.7/dist-packages (from tensorboard>=2.2.0; extra == "extra"->stable-baselines3[extra]->finrl==0.3.0) (1.34.1)
Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard>=2.2.0; extra == "extra"->stable-baselines3[extra]->finrl==0.3.0) (1.8.0)
Requirement already satisfied: google-auth<2,>=1.6.3 in /usr/local/lib/python3.7/dist-packages (from tensorboard>=2.2.0; extra == "extra"->stable-baselines3[extra]->finrl==0.3.0) (1.31.0)
Requirement already satisfied: absl-py>=0.4 in /usr/local/lib/python3.7/dist-packages (from tensorboard>=2.2.0; extra == "extra"->stable-baselines3[extra]->finrl==0.3.0) (0.12.0)
Requirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.7/dist-packages (from tensorboard>=2.2.0; extra == "extra"->stable-baselines3[extra]->finrl==0.3.0) (1.0.1)
Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.7/dist-packages (from tensorboard>=2.2.0; extra == "extra"->stable-baselines3[extra]->finrl==0.3.0) (3.3.4)
Requirement already satisfied: protobuf>=3.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard>=2.2.0; extra == "extra"->stable-baselines3[extra]->finrl==0.3.0) (3.12.4)
Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in /usr/local/lib/python3.7/dist-packages (from tensorboard>=2.2.0; extra == "extra"->stable-baselines3[extra]->finrl==0.3.0) (0.4.4)
Requirement already satisfied: pygments in /usr/local/lib/python3.7/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.0) (2.6.1)
Requirement already satisfied: pickleshare in /usr/local/lib/python3.7/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.0) (0.7.5)
Requirement already satisfied: traitlets>=4.2 in /usr/local/lib/python3.7/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.0) (5.0.5)
Requirement already satisfied: pexpect; sys_platform != "win32" in /usr/local/lib/python3.7/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.0) (4.8.0)
Requirement already satisfied: decorator in /usr/local/lib/python3.7/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.0) (4.4.2)
Requirement already satisfied: simplegeneric>0.8 in /usr/local/lib/python3.7/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.0) (0.8.1)
Requirement already satisfied: prompt-toolkit<2.0.0,>=1.0.4 in /usr/local/lib/python3.7/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.0) (1.0.18)
Requirement already satisfied: pandas-datareader>=0.2 in /usr/local/lib/python3.7/dist-packages (from empyrical>=0.5.0->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.0) (0.9.0)
Requirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.7/dist-packages (from google-auth<2,>=1.6.3->tensorboard>=2.2.0; extra == "extra"->stable-baselines3[extra]->finrl==0.3.0) (0.2.8)
Requirement already satisfied: rsa<5,>=3.1.4; python_version >= "3.6" in /usr/local/lib/python3.7/dist-packages (from google-auth<2,>=1.6.3->tensorboard>=2.2.0; extra == "extra"->stable-baselines3[extra]->finrl==0.3.0) (4.7.2)
Requirement already satisfied: cachetools<5.0,>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from google-auth<2,>=1.6.3->tensorboard>=2.2.0; extra == "extra"->stable-baselines3[extra]->finrl==0.3.0) (4.2.2)
Requirement already satisfied: importlib-metadata; python_version < "3.8" in /usr/local/lib/python3.7/dist-packages (from markdown>=2.6.8->tensorboard>=2.2.0; extra == "extra"->stable-baselines3[extra]->finrl==0.3.0) (4.5.0)
Requirement already satisfied: requests-oauthlib>=0.7.0 in /usr/local/lib/python3.7/dist-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard>=2.2.0; extra == "extra"->stable-baselines3[extra]->finrl==0.3.0) (1.3.0)
Requirement already satisfied: ipython-genutils in /usr/local/lib/python3.7/dist-packages (from traitlets>=4.2->ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.0) (0.2.0)
Requirement already satisfied: ptyprocess>=0.5 in /usr/local/lib/python3.7/dist-packages (from pexpect; sys_platform != "win32"->ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.0) (0.7.0)
Requirement already satisfied: wcwidth in /usr/local/lib/python3.7/dist-packages (from prompt-toolkit<2.0.0,>=1.0.4->ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.0) (0.2.5)
Requirement already satisfied: pyasn1<0.5.0,>=0.4.6 in /usr/local/lib/python3.7/dist-packages (from pyasn1-modules>=0.2.1->google-auth<2,>=1.6.3->tensorboard>=2.2.0; extra == "extra"->stable-baselines3[extra]->finrl==0.3.0) (0.4.8)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.7/dist-packages (from importlib-metadata; python_version < "3.8"->markdown>=2.6.8->tensorboard>=2.2.0; extra == "extra"->stable-baselines3[extra]->finrl==0.3.0) (3.4.1)
Requirement already satisfied: oauthlib>=3.0.0 in /usr/local/lib/python3.7/dist-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard>=2.2.0; extra == "extra"->stable-baselines3[extra]->finrl==0.3.0) (3.1.1)
Building wheels for collected packages: finrl, yfinance, pyfolio, empyrical
  Building wheel for finrl (setup.py) ... done
  Created wheel for finrl: filename=finrl-0.3.0-cp37-none-any.whl size=39029 sha256=b5e0e12e95b4121b93cd651c6235b56c1f0036b8cd5fe4282ed341b89b704b71
  Stored in directory: /tmp/pip-ephem-wheel-cache-81fatlyd/wheels/9c/19/bf/c644def96612df1ad42c94d5304966797eaa3221dffc5efe0b
  Building wheel for yfinance (setup.py) ... done
  Created wheel for yfinance: filename=yfinance-0.1.60-py2.py3-none-any.whl size=23819 sha256=d46d05a6ce39a7748caf23509013c8d4f9dbd94a5df0367083a19f5756645a42
  Stored in directory: /root/.cache/pip/wheels/f0/be/a4/846f02c5985562250917b0ab7b33fff737c8e6e8cd5209aa3b
  Building wheel for pyfolio (setup.py) ... done
  Created wheel for pyfolio: filename=pyfolio-0.9.2+75.g4b901f6-cp37-none-any.whl size=75776 sha256=11761329471b373c2c50b45eb52816dfdf975ce01142cdb4f4a774fab19b7a13
  Stored in directory: /tmp/pip-ephem-wheel-cache-81fatlyd/wheels/43/ce/d9/6752fb6e03205408773235435205a0519d2c608a94f1976e56
  Building wheel for empyrical (setup.py) ... done
  Created wheel for empyrical: filename=empyrical-0.5.5-cp37-none-any.whl size=39780 sha256=2b4515c7d9f959d16244e3b5a89dad1452b2bbc5e8b309d5e43e128f2e87a888
  Stored in directory: /root/.cache/pip/wheels/ea/b2/c8/6769d8444d2f2e608fae2641833110668d0ffd1abeb2e9f3fc
Successfully built finrl yfinance pyfolio empyrical
Installing collected packages: int-date, stockstats, lxml, yfinance, stable-baselines3, empyrical, pyfolio, finrl
Found existing installation: lxml 4.2.6
Uninstalling lxml-4.2.6:
Successfully uninstalled lxml-4.2.6
Successfully installed empyrical-0.5.5 finrl-0.3.0 int-date-0.1.8 lxml-4.6.3 pyfolio-0.9.2+75.g4b901f6 stable-baselines3-1.1.0 stockstats-0.3.2 yfinance-0.1.60
###Markdown
2.2. Check if the additional packages needed are present; if not, install them.
* Yahoo Finance API
* pandas
* numpy
* matplotlib
* stockstats
* OpenAI gym
* stable-baselines
* tensorflow
* pyfolio
2.3. Import Packages
###Code
import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
matplotlib.use('Agg')
import datetime
from finrl.config import config
from finrl.marketdata.yahoodownloader import YahooDownloader
from finrl.preprocessing.preprocessors import FeatureEngineer
from finrl.preprocessing.data import data_split
from finrl.env.env_portfolio import StockPortfolioEnv
from finrl.model.models import DRLAgent
from finrl.trade.backtest import backtest_stats, backtest_plot, get_daily_return, get_baseline,convert_daily_return_to_pyfolio_ts
import sys
sys.path.append("../FinRL-Library")
###Output
/opt/conda/lib/python3.6/site-packages/pyfolio/pos.py:27: UserWarning: Module "zipline.assets" not found; multipliers will not be applied to position notionals.
'Module "zipline.assets" not found; multipliers will not be applied'
###Markdown
2.4. Create Folders
###Code
import os
if not os.path.exists("./" + config.DATA_SAVE_DIR):
os.makedirs("./" + config.DATA_SAVE_DIR)
if not os.path.exists("./" + config.TRAINED_MODEL_DIR):
os.makedirs("./" + config.TRAINED_MODEL_DIR)
if not os.path.exists("./" + config.TENSORBOARD_LOG_DIR):
os.makedirs("./" + config.TENSORBOARD_LOG_DIR)
if not os.path.exists("./" + config.RESULTS_DIR):
os.makedirs("./" + config.RESULTS_DIR)
###Output
_____no_output_____
###Markdown
Part 3. Download Data
Yahoo Finance is a website that provides stock data, financial news, financial reports, etc. All the data provided by Yahoo Finance is free.
* FinRL uses a class **YahooDownloader** to fetch data from Yahoo Finance API
* Call Limit: Using the Public API (without authentication), you are limited to 2,000 requests per hour per IP (or up to a total of 48,000 requests a day).
###Code
print(config.DOW_30_TICKER)
df = YahooDownloader(start_date = '2008-01-01',
end_date = '2021-07-01',
ticker_list = config.DOW_30_TICKER).fetch_data()
df.head()
df.shape
###Output
_____no_output_____
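###Markdown
Given the call limit mentioned above, it can be worth caching the downloaded data locally (a sketch; the `config.DATA_SAVE_DIR` folder was created in Part 2 and the file name is illustrative):
###Code
# Optional: cache the raw download so reruns don't hit the Yahoo Finance API again.
df.to_csv("./" + config.DATA_SAVE_DIR + "/dow30_raw.csv", index=False)
# df = pd.read_csv("./" + config.DATA_SAVE_DIR + "/dow30_raw.csv")  # reload later
###Output
_____no_output_____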
###Markdown
Part 4: Preprocess Data
Data preprocessing is a crucial step for training a high quality machine learning model. We need to check for missing data and do feature engineering in order to convert the data into a model-ready state.
* Add technical indicators. In practical trading, various information needs to be taken into account, for example the historical stock prices, current holding shares, technical indicators, etc. In this article, we demonstrate two trend-following technical indicators: MACD and RSI.
* Add turbulence index. Risk-aversion reflects whether an investor will choose to preserve capital. It also influences one's trading strategy when facing different market volatility levels. To control the risk in a worst-case scenario, such as the financial crisis of 2007–2008, FinRL employs the financial turbulence index that measures extreme asset price fluctuation.
###Code
fe = FeatureEngineer(
use_technical_indicator=True,
use_turbulence=False,
user_defined_feature = False)
df = fe.preprocess_data(df)
df.shape
df.head()
###Output
_____no_output_____
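###Markdown
To see which technical-indicator columns `FeatureEngineer` added (MACD, RSI, etc.; the exact column names depend on the FinRL version), inspect the DataFrame columns:
###Code
# List the engineered feature columns alongside the raw OHLCV columns.
print(sorted(df.columns))
###Output
_____no_output_____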
###Markdown
Add covariance matrix as states
###Code
# add covariance matrix as states
df = df.sort_values(['date', 'tic'], ignore_index=True)
df.index = df.date.factorize()[0]
cov_list = []
return_list = []
# look back one trading year (252 days) to estimate each covariance matrix
lookback = 252
for i in range(lookback, len(df.index.unique())):
    # trailing window of prices, pivoted to a date x ticker matrix
    data_lookback = df.loc[i-lookback:i, :]
    price_lookback = data_lookback.pivot_table(index='date', columns='tic', values='close')
    # daily returns over the window; the first row is NaN and is dropped
    return_lookback = price_lookback.pct_change().dropna()
    return_list.append(return_lookback)
    # sample covariance of daily returns across tickers
    covs = return_lookback.cov().values
    cov_list.append(covs)
df_cov = pd.DataFrame({'date': df.date.unique()[lookback:], 'cov_list': cov_list, 'return_list': return_list})
df = df.merge(df_cov, on='date')
df = df.sort_values(['date', 'tic']).reset_index(drop=True)
df.shape
df.head()
###Output
_____no_output_____
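###Markdown
A quick sanity check on the result: each stored covariance matrix should be square, with one row and one column per ticker (30 x 30 for the Dow 30 universe):
###Code
# shape of the first covariance matrix and the number of dates that carry one
print(df['cov_list'].iloc[0].shape, len(df.date.unique()))
###Output
_____no_output_____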
###Markdown
Part 5. Design Environment Considering the stochastic and interactive nature of automated stock trading tasks, a financial task is modeled as a **Markov Decision Process (MDP)** problem. The training process involves observing stock price changes, taking an action, and calculating the reward, so that the agent adjusts its strategy accordingly. By interacting with the environment, the trading agent derives a trading strategy that maximizes rewards over time.Our trading environments, based on the OpenAI Gym framework, simulate live stock markets with real market data according to the principle of time-driven simulation.The action space describes the allowed actions that the agent can take in the environment. Normally, action a includes three values: {-1, 0, 1}, where -1, 0, 1 represent selling, holding, and buying one share. An action can also be carried out on multiple shares, so we use an action space {-k,…,-1, 0, 1, …, k}, where k denotes the number of shares to buy and -k the number of shares to sell. For example, "Buy 10 shares of AAPL" and "Sell 10 shares of AAPL" are 10 and -10, respectively. The continuous action space needs to be normalized to [-1, 1], since the policy is defined on a Gaussian distribution, which needs to be normalized and symmetric. Training data split: 2009-01-01 to 2020-07-01
###Code
train = data_split(df, '2009-01-01','2020-07-01')
#trade = data_split(df, '2020-01-01', config.END_DATE)
train.head()
###Output
_____no_output_____
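###Markdown
The environment defined below converts the agent's raw actions into portfolio weights with a softmax, so the weights are positive and sum to 1. A standalone sketch of that mapping follows; the max-subtraction is a standard numerical-stability trick and yields the same weights as the plain version used inside the class:
###Code
import numpy as np
def softmax_weights(actions):
    """Map raw action values to portfolio weights that sum to 1."""
    e = np.exp(actions - np.max(actions))  # shift by the max for numerical stability
    return e / e.sum()
# three assets: a larger raw action yields a larger weight
softmax_weights(np.array([0.5, -1.0, 2.0]))
###Output
_____no_output_____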
###Markdown
Environment for Portfolio Allocation
###Code
import numpy as np
import pandas as pd
from gym.utils import seeding
import gym
from gym import spaces
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
from stable_baselines3.common.vec_env import DummyVecEnv
class StockPortfolioEnv(gym.Env):
"""A single stock trading environment for OpenAI gym
Attributes
----------
df: DataFrame
input data
stock_dim : int
number of unique stocks
hmax : int
maximum number of shares to trade
initial_amount : int
start money
transaction_cost_pct: float
transaction cost percentage per trade
reward_scaling: float
scaling factor for reward, good for training
state_space: int
the dimension of input features
action_space: int
equals stock dimension
tech_indicator_list: list
a list of technical indicator names
turbulence_threshold: int
a threshold to control risk aversion
day: int
an increment number to control date
Methods
-------
_sell_stock()
perform sell action based on the sign of the action
_buy_stock()
perform buy action based on the sign of the action
step()
at each step the agent will return actions, then
we will calculate the reward, and return the next observation.
reset()
reset the environment
render()
use render to return other functions
save_asset_memory()
return account value at each time step
save_action_memory()
return actions/positions at each time step
"""
metadata = {'render.modes': ['human']}
def __init__(self,
df,
stock_dim,
hmax,
initial_amount,
transaction_cost_pct,
reward_scaling,
state_space,
action_space,
tech_indicator_list,
turbulence_threshold=None,
lookback=252,
day = 0):
#super(StockEnv, self).__init__()
#money = 10 , scope = 1
self.day = day
self.lookback=lookback
self.df = df
self.stock_dim = stock_dim
self.hmax = hmax
self.initial_amount = initial_amount
self.transaction_cost_pct =transaction_cost_pct
self.reward_scaling = reward_scaling
self.state_space = state_space
self.action_space = action_space
self.tech_indicator_list = tech_indicator_list
# action space: one weight per stock, bounded to [0, 1]
self.action_space = spaces.Box(low=0, high=1, shape=(self.action_space,))
# observation: covariance matrix stacked with one row per technical indicator,
# i.e. shape (state_space + n_indicators, state_space)
self.observation_space = spaces.Box(low=-np.inf, high=np.inf, shape=(self.state_space + len(self.tech_indicator_list), self.state_space))
# load data from a pandas dataframe
self.data = self.df.loc[self.day,:]
self.covs = self.data['cov_list'].values[0]
self.state = np.append(np.array(self.covs), [self.data[tech].values.tolist() for tech in self.tech_indicator_list ], axis=0)
self.terminal = False
self.turbulence_threshold = turbulence_threshold
# initialize state: initial portfolio return + individual stock return + individual weights
self.portfolio_value = self.initial_amount
# memorize portfolio value each step
self.asset_memory = [self.initial_amount]
# memorize portfolio return each step
self.portfolio_return_memory = [0]
self.actions_memory=[[1/self.stock_dim]*self.stock_dim]
self.date_memory=[self.data.date.unique()[0]]
def step(self, actions):
# print(self.day)
self.terminal = self.day >= len(self.df.index.unique())-1
# print(actions)
if self.terminal:
df = pd.DataFrame(self.portfolio_return_memory)
df.columns = ['daily_return']
plt.plot(df.daily_return.cumsum(),'r')
plt.savefig('results/cumulative_reward.png')
plt.close()
plt.plot(self.portfolio_return_memory,'r')
plt.savefig('results/rewards.png')
plt.close()
print("=================================")
print("begin_total_asset:{}".format(self.asset_memory[0]))
print("end_total_asset:{}".format(self.portfolio_value))
df_daily_return = pd.DataFrame(self.portfolio_return_memory)
df_daily_return.columns = ['daily_return']
if df_daily_return['daily_return'].std() !=0:
sharpe = (252**0.5)*df_daily_return['daily_return'].mean()/ \
df_daily_return['daily_return'].std()
print("Sharpe: ",sharpe)
print("=================================")
return self.state, self.reward, self.terminal,{}
else:
#print("Model actions: ",actions)
# actions are the portfolio weight
# normalize to sum of 1
#if (np.array(actions) - np.array(actions).min()).sum() != 0:
# norm_actions = (np.array(actions) - np.array(actions).min()) / (np.array(actions) - np.array(actions).min()).sum()
#else:
# norm_actions = actions
weights = self.softmax_normalization(actions)
#print("Normalized actions: ", weights)
self.actions_memory.append(weights)
last_day_memory = self.data
#load next state
self.day += 1
self.data = self.df.loc[self.day,:]
self.covs = self.data['cov_list'].values[0]
self.state = np.append(np.array(self.covs), [self.data[tech].values.tolist() for tech in self.tech_indicator_list ], axis=0)
#print(self.state)
# calculate portfolio return
# individual stocks' return * weight
portfolio_return = sum(((self.data.close.values / last_day_memory.close.values)-1)*weights)
# update portfolio value
new_portfolio_value = self.portfolio_value*(1+portfolio_return)
self.portfolio_value = new_portfolio_value
# save into memory
self.portfolio_return_memory.append(portfolio_return)
self.date_memory.append(self.data.date.unique()[0])
self.asset_memory.append(new_portfolio_value)
# the reward is the new portfolio value at the end of the step
self.reward = new_portfolio_value
#print("Step reward: ", self.reward)
#self.reward = self.reward*self.reward_scaling
return self.state, self.reward, self.terminal, {}
def reset(self):
self.asset_memory = [self.initial_amount]
self.day = 0
self.data = self.df.loc[self.day,:]
# load states
self.covs = self.data['cov_list'].values[0]
self.state = np.append(np.array(self.covs), [self.data[tech].values.tolist() for tech in self.tech_indicator_list ], axis=0)
self.portfolio_value = self.initial_amount
#self.cost = 0
#self.trades = 0
self.terminal = False
self.portfolio_return_memory = [0]
self.actions_memory=[[1/self.stock_dim]*self.stock_dim]
self.date_memory=[self.data.date.unique()[0]]
return self.state
def render(self, mode='human'):
return self.state
def softmax_normalization(self, actions):
numerator = np.exp(actions)
denominator = np.sum(np.exp(actions))
softmax_output = numerator/denominator
return softmax_output
def save_asset_memory(self):
date_list = self.date_memory
portfolio_return = self.portfolio_return_memory
#print(len(date_list))
#print(len(asset_list))
df_account_value = pd.DataFrame({'date':date_list,'daily_return':portfolio_return})
return df_account_value
def save_action_memory(self):
# date and close price length must match actions length
date_list = self.date_memory
df_date = pd.DataFrame(date_list)
df_date.columns = ['date']
action_list = self.actions_memory
df_actions = pd.DataFrame(action_list)
df_actions.columns = self.data.tic.values
df_actions.index = df_date.date
#df_actions = pd.DataFrame({'date':date_list,'actions':action_list})
return df_actions
def _seed(self, seed=None):
self.np_random, seed = seeding.np_random(seed)
return [seed]
def get_sb_env(self):
e = DummyVecEnv([lambda: self])
obs = e.reset()
return e, obs
stock_dimension = len(train.tic.unique())
state_space = stock_dimension
print(f"Stock Dimension: {stock_dimension}, State Space: {state_space}")
env_kwargs = {
"hmax": 100,
"initial_amount": 1000000,
"transaction_cost_pct": 0.001,
"state_space": state_space,
"stock_dim": stock_dimension,
"tech_indicator_list": config.TECHNICAL_INDICATORS_LIST,
"action_space": stock_dimension,
"reward_scaling": 1e-4
}
e_train_gym = StockPortfolioEnv(df = train, **env_kwargs)
env_train, _ = e_train_gym.get_sb_env()
print(type(env_train))
###Output
<class 'stable_baselines3.common.vec_env.dummy_vec_env.DummyVecEnv'>
###Markdown
Part 6: Implement DRL Algorithms* The implementation of the DRL algorithms is based on **OpenAI Baselines** and **Stable Baselines**. Stable Baselines is a fork of OpenAI Baselines, with major structural refactoring and code cleanups.* The FinRL library includes fine-tuned standard DRL algorithms, such as DQN, DDPG, Multi-Agent DDPG, PPO, SAC, A2C and TD3. We also allow users to design their own DRL algorithms by adapting these implementations.
###Code
# initialize
agent = DRLAgent(env = env_train)
###Output
_____no_output_____
###Markdown
Model 1: **A2C**
###Code
agent = DRLAgent(env = env_train)
A2C_PARAMS = {"n_steps": 5, "ent_coef": 0.005, "learning_rate": 0.0002}
model_a2c = agent.get_model(model_name="a2c",model_kwargs = A2C_PARAMS)
trained_a2c = agent.train_model(model=model_a2c,
tb_log_name='a2c',
total_timesteps=50000)
###Output
Logging to tensorboard_log/a2c/a2c_2
------------------------------------
| time/ | |
| fps | 352 |
| iterations | 100 |
| time_elapsed | 1 |
| total_timesteps | 500 |
| train/ | |
| entropy_loss | -41.3 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 12099 |
| policy_loss | 2.01e+08 |
| std | 0.959 |
| value_loss | 2.64e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 350 |
| iterations | 200 |
| time_elapsed | 2 |
| total_timesteps | 1000 |
| train/ | |
| entropy_loss | -41.3 |
| explained_variance | 5.96e-08 |
| learning_rate | 0.0002 |
| n_updates | 12199 |
| policy_loss | 2.37e+08 |
| std | 0.958 |
| value_loss | 4.39e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 349 |
| iterations | 300 |
| time_elapsed | 4 |
| total_timesteps | 1500 |
| train/ | |
| entropy_loss | -41.3 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 12299 |
| policy_loss | 3.76e+08 |
| std | 0.958 |
| value_loss | 1.01e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 349 |
| iterations | 400 |
| time_elapsed | 5 |
| total_timesteps | 2000 |
| train/ | |
| entropy_loss | -41.3 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 12399 |
| policy_loss | 4.06e+08 |
| std | 0.958 |
| value_loss | 1.33e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 349 |
| iterations | 500 |
| time_elapsed | 7 |
| total_timesteps | 2500 |
| train/ | |
| entropy_loss | -41.2 |
| explained_variance | 5.96e-08 |
| learning_rate | 0.0002 |
| n_updates | 12499 |
| policy_loss | 5.86e+08 |
| std | 0.957 |
| value_loss | 2.7e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4685300.195654661
Sharpe: 1.0453114515340531
=================================
-------------------------------------
| time/ | |
| fps | 340 |
| iterations | 600 |
| time_elapsed | 8 |
| total_timesteps | 3000 |
| train/ | |
| entropy_loss | -41.2 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 12599 |
| policy_loss | 1.81e+08 |
| std | 0.956 |
| value_loss | 2.19e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 341 |
| iterations | 700 |
| time_elapsed | 10 |
| total_timesteps | 3500 |
| train/ | |
| entropy_loss | -41.2 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 12699 |
| policy_loss | 2.12e+08 |
| std | 0.956 |
| value_loss | 3.4e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 342 |
| iterations | 800 |
| time_elapsed | 11 |
| total_timesteps | 4000 |
| train/ | |
| entropy_loss | -41.2 |
| explained_variance | 5.96e-08 |
| learning_rate | 0.0002 |
| n_updates | 12799 |
| policy_loss | 3.43e+08 |
| std | 0.956 |
| value_loss | 8.32e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 342 |
| iterations | 900 |
| time_elapsed | 13 |
| total_timesteps | 4500 |
| train/ | |
| entropy_loss | -41.2 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 12899 |
| policy_loss | 3.89e+08 |
| std | 0.955 |
| value_loss | 1.1e+14 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 343 |
| iterations | 1000 |
| time_elapsed | 14 |
| total_timesteps | 5000 |
| train/ | |
| entropy_loss | -41.2 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 12999 |
| policy_loss | 4.89e+08 |
| std | 0.955 |
| value_loss | 2.13e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4211670.620824253
Sharpe: 0.9836152322815558
=================================
------------------------------------
| time/ | |
| fps | 339 |
| iterations | 1100 |
| time_elapsed | 16 |
| total_timesteps | 5500 |
| train/ | |
| entropy_loss | -41.2 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 13099 |
| policy_loss | 1.73e+08 |
| std | 0.954 |
| value_loss | 2.48e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 340 |
| iterations | 1200 |
| time_elapsed | 17 |
| total_timesteps | 6000 |
| train/ | |
| entropy_loss | -41.1 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 13199 |
| policy_loss | 2.48e+08 |
| std | 0.954 |
| value_loss | 4.39e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 340 |
| iterations | 1300 |
| time_elapsed | 19 |
| total_timesteps | 6500 |
| train/ | |
| entropy_loss | -41.1 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 13299 |
| policy_loss | 3.53e+08 |
| std | 0.953 |
| value_loss | 9.04e+13 |
-------------------------------------
-------------------------------------
| time/ | |
| fps | 340 |
| iterations | 1400 |
| time_elapsed | 20 |
| total_timesteps | 7000 |
| train/ | |
| entropy_loss | -41.1 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 13399 |
| policy_loss | 3.96e+08 |
| std | 0.953 |
| value_loss | 1.21e+14 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 341 |
| iterations | 1500 |
| time_elapsed | 21 |
| total_timesteps | 7500 |
| train/ | |
| entropy_loss | -41.1 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 13499 |
| policy_loss | 5.96e+08 |
| std | 0.953 |
| value_loss | 2.56e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4639828.316566328
Sharpe: 1.0428808028309948
=================================
------------------------------------
| time/ | |
| fps | 338 |
| iterations | 1600 |
| time_elapsed | 23 |
| total_timesteps | 8000 |
| train/ | |
| entropy_loss | -41.1 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 13599 |
| policy_loss | 1.93e+08 |
| std | 0.952 |
| value_loss | 2.53e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 339 |
| iterations | 1700 |
| time_elapsed | 25 |
| total_timesteps | 8500 |
| train/ | |
| entropy_loss | -41.1 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 13699 |
| policy_loss | 2.44e+08 |
| std | 0.952 |
| value_loss | 4.34e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 339 |
| iterations | 1800 |
| time_elapsed | 26 |
| total_timesteps | 9000 |
| train/ | |
| entropy_loss | -41.1 |
| explained_variance | 1.79e-07 |
| learning_rate | 0.0002 |
| n_updates | 13799 |
| policy_loss | 3.55e+08 |
| std | 0.952 |
| value_loss | 9.29e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 340 |
| iterations | 1900 |
| time_elapsed | 27 |
| total_timesteps | 9500 |
| train/ | |
| entropy_loss | -41.1 |
| explained_variance | -2.38e-07 |
| learning_rate | 0.0002 |
| n_updates | 13899 |
| policy_loss | 4.14e+08 |
| std | 0.952 |
| value_loss | 1.31e+14 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 340 |
| iterations | 2000 |
| time_elapsed | 29 |
| total_timesteps | 10000 |
| train/ | |
| entropy_loss | -41.1 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 13999 |
| policy_loss | 6.22e+08 |
| std | 0.951 |
| value_loss | 2.87e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4775229.061094982
Sharpe: 1.0650992139820405
=================================
------------------------------------
| time/ | |
| fps | 338 |
| iterations | 2100 |
| time_elapsed | 31 |
| total_timesteps | 10500 |
| train/ | |
| entropy_loss | -41 |
| explained_variance | 5.96e-08 |
| learning_rate | 0.0002 |
| n_updates | 14099 |
| policy_loss | 1.82e+08 |
| std | 0.951 |
| value_loss | 2.24e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 338 |
| iterations | 2200 |
| time_elapsed | 32 |
| total_timesteps | 11000 |
| train/ | |
| entropy_loss | -41 |
| explained_variance | -2.38e-07 |
| learning_rate | 0.0002 |
| n_updates | 14199 |
| policy_loss | 2.38e+08 |
| std | 0.95 |
| value_loss | 4.4e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 338 |
| iterations | 2300 |
| time_elapsed | 33 |
| total_timesteps | 11500 |
| train/ | |
| entropy_loss | -41 |
| explained_variance | 2.38e-07 |
| learning_rate | 0.0002 |
| n_updates | 14299 |
| policy_loss | 3.63e+08 |
| std | 0.949 |
| value_loss | 9.98e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 339 |
| iterations | 2400 |
| time_elapsed | 35 |
| total_timesteps | 12000 |
| train/ | |
| entropy_loss | -41 |
| explained_variance | -2.38e-07 |
| learning_rate | 0.0002 |
| n_updates | 14399 |
| policy_loss | 4.3e+08 |
| std | 0.948 |
| value_loss | 1.35e+14 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 339 |
| iterations | 2500 |
| time_elapsed | 36 |
| total_timesteps | 12500 |
| train/ | |
| entropy_loss | -41 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 14499 |
| policy_loss | 6.25e+08 |
| std | 0.948 |
| value_loss | 2.75e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4678739.328824231
Sharpe: 1.0465241688702438
=================================
------------------------------------
| time/ | |
| fps | 338 |
| iterations | 2600 |
| time_elapsed | 38 |
| total_timesteps | 13000 |
| train/ | |
| entropy_loss | -41 |
| explained_variance | 5.96e-08 |
| learning_rate | 0.0002 |
| n_updates | 14599 |
| policy_loss | 1.82e+08 |
| std | 0.948 |
| value_loss | 1.89e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 338 |
| iterations | 2700 |
| time_elapsed | 39 |
| total_timesteps | 13500 |
| train/ | |
| entropy_loss | -41 |
| explained_variance | 5.96e-08 |
| learning_rate | 0.0002 |
| n_updates | 14699 |
| policy_loss | 2.21e+08 |
| std | 0.948 |
| value_loss | 3.95e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 338 |
| iterations | 2800 |
| time_elapsed | 41 |
| total_timesteps | 14000 |
| train/ | |
| entropy_loss | -40.9 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 14799 |
| policy_loss | 3.3e+08 |
| std | 0.948 |
| value_loss | 8.54e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 338 |
| iterations | 2900 |
| time_elapsed | 42 |
| total_timesteps | 14500 |
| train/ | |
| entropy_loss | -40.9 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 14899 |
| policy_loss | 4.24e+08 |
| std | 0.947 |
| value_loss | 1.26e+14 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 339 |
| iterations | 3000 |
| time_elapsed | 44 |
| total_timesteps | 15000 |
| train/ | |
| entropy_loss | -40.9 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 14999 |
| policy_loss | 5.96e+08 |
| std | 0.947 |
| value_loss | 2.6e+14 |
-------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4677079.483218055
Sharpe: 1.043334299291766
=================================
-------------------------------------
| time/ | |
| fps | 336 |
| iterations | 3100 |
| time_elapsed | 46 |
| total_timesteps | 15500 |
| train/ | |
| entropy_loss | -40.9 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 15099 |
| policy_loss | 1.66e+08 |
| std | 0.947 |
| value_loss | 2e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 336 |
| iterations | 3200 |
| time_elapsed | 47 |
| total_timesteps | 16000 |
| train/ | |
| entropy_loss | -40.9 |
| explained_variance | 5.96e-08 |
| learning_rate | 0.0002 |
| n_updates | 15199 |
| policy_loss | 2.31e+08 |
| std | 0.947 |
| value_loss | 3.68e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 337 |
| iterations | 3300 |
| time_elapsed | 48 |
| total_timesteps | 16500 |
| train/ | |
| entropy_loss | -40.9 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 15299 |
| policy_loss | 3.32e+08 |
| std | 0.946 |
| value_loss | 8.59e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 337 |
| iterations | 3400 |
| time_elapsed | 50 |
| total_timesteps | 17000 |
| train/ | |
| entropy_loss | -40.9 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 15399 |
| policy_loss | 3.93e+08 |
| std | 0.945 |
| value_loss | 1.15e+14 |
-------------------------------------
-------------------------------------
| time/ | |
| fps | 337 |
| iterations | 3500 |
| time_elapsed | 51 |
| total_timesteps | 17500 |
| train/ | |
| entropy_loss | -40.8 |
| explained_variance | -2.38e-07 |
| learning_rate | 0.0002 |
| n_updates | 15499 |
| policy_loss | 5.3e+08 |
| std | 0.944 |
| value_loss | 2.09e+14 |
-------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4359923.802114374
Sharpe: 1.0008163852772658
=================================
------------------------------------
| time/ | |
| fps | 336 |
| iterations | 3600 |
| time_elapsed | 53 |
| total_timesteps | 18000 |
| train/ | |
| entropy_loss | -40.8 |
| explained_variance | 1.79e-07 |
| learning_rate | 0.0002 |
| n_updates | 15599 |
| policy_loss | 1.7e+08 |
| std | 0.944 |
| value_loss | 1.92e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 336 |
| iterations | 3700 |
| time_elapsed | 54 |
| total_timesteps | 18500 |
| train/ | |
| entropy_loss | -40.8 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 15699 |
| policy_loss | 2.33e+08 |
| std | 0.943 |
| value_loss | 3.59e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 336 |
| iterations | 3800 |
| time_elapsed | 56 |
| total_timesteps | 19000 |
| train/ | |
| entropy_loss | -40.8 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 15799 |
| policy_loss | 3.35e+08 |
| std | 0.944 |
| value_loss | 8.35e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 337 |
| iterations | 3900 |
| time_elapsed | 57 |
| total_timesteps | 19500 |
| train/ | |
| entropy_loss | -40.8 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 15899 |
| policy_loss | 3.89e+08 |
| std | 0.944 |
| value_loss | 1.06e+14 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 337 |
| iterations | 4000 |
| time_elapsed | 59 |
| total_timesteps | 20000 |
| train/ | |
| entropy_loss | -40.8 |
| explained_variance | -2.38e-07 |
| learning_rate | 0.0002 |
| n_updates | 15999 |
| policy_loss | 5.99e+08 |
| std | 0.944 |
| value_loss | 2.18e+14 |
-------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4518146.7620793665
Sharpe: 1.017512586785335
=================================
------------------------------------
| time/ | |
| fps | 336 |
| iterations | 4100 |
| time_elapsed | 60 |
| total_timesteps | 20500 |
| train/ | |
| entropy_loss | -40.8 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 16099 |
| policy_loss | 1.81e+08 |
| std | 0.943 |
| value_loss | 2.24e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 336 |
| iterations | 4200 |
| time_elapsed | 62 |
| total_timesteps | 21000 |
| train/ | |
| entropy_loss | -40.8 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 16199 |
| policy_loss | 2.17e+08 |
| std | 0.943 |
| value_loss | 3.98e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 336 |
| iterations | 4300 |
| time_elapsed | 63 |
| total_timesteps | 21500 |
| train/ | |
| entropy_loss | -40.8 |
| explained_variance | 1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 16299 |
| policy_loss | 3.55e+08 |
| std | 0.942 |
| value_loss | 9.99e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 336 |
| iterations | 4400 |
| time_elapsed | 65 |
| total_timesteps | 22000 |
| train/ | |
| entropy_loss | -40.8 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 16399 |
| policy_loss | 4.37e+08 |
| std | 0.942 |
| value_loss | 1.35e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 337 |
| iterations | 4500 |
| time_elapsed | 66 |
| total_timesteps | 22500 |
| train/ | |
| entropy_loss | -40.7 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 16499 |
| policy_loss | 5.77e+08 |
| std | 0.941 |
| value_loss | 2.56e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4947928.885546611
Sharpe: 1.0770541591532077
=================================
------------------------------------
| time/ | |
| fps | 336 |
| iterations | 4600 |
| time_elapsed | 68 |
| total_timesteps | 23000 |
| train/ | |
| entropy_loss | -40.7 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 16599 |
| policy_loss | 1.56e+08 |
| std | 0.94 |
| value_loss | 1.75e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 336 |
| iterations | 4700 |
| time_elapsed | 69 |
| total_timesteps | 23500 |
| train/ | |
| entropy_loss | -40.7 |
| explained_variance | 5.96e-08 |
| learning_rate | 0.0002 |
| n_updates | 16699 |
| policy_loss | 2.11e+08 |
| std | 0.94 |
| value_loss | 3.38e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 336 |
| iterations | 4800 |
| time_elapsed | 71 |
| total_timesteps | 24000 |
| train/ | |
| entropy_loss | -40.7 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 16799 |
| policy_loss | 3.25e+08 |
| std | 0.94 |
| value_loss | 8.02e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 336 |
| iterations | 4900 |
| time_elapsed | 72 |
| total_timesteps | 24500 |
| train/ | |
| entropy_loss | -40.7 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 16899 |
| policy_loss | 4.07e+08 |
| std | 0.94 |
| value_loss | 1.14e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 336 |
| iterations | 5000 |
| time_elapsed | 74 |
| total_timesteps | 25000 |
| train/ | |
| entropy_loss | -40.7 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 16999 |
| policy_loss | 5.45e+08 |
| std | 0.939 |
| value_loss | 2.2e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4708435.905981962
Sharpe: 1.0421275396424545
=================================
------------------------------------
| time/ | |
| fps | 336 |
| iterations | 5100 |
| time_elapsed | 75 |
| total_timesteps | 25500 |
| train/ | |
| entropy_loss | -40.7 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 17099 |
| policy_loss | 1.74e+08 |
| std | 0.939 |
| value_loss | 2.32e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 336 |
| iterations | 5200 |
| time_elapsed | 77 |
| total_timesteps | 26000 |
| train/ | |
| entropy_loss | -40.6 |
| explained_variance | 5.96e-08 |
| learning_rate | 0.0002 |
| n_updates | 17199 |
| policy_loss | 2.26e+08 |
| std | 0.938 |
| value_loss | 3.94e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 336 |
| iterations | 5300 |
| time_elapsed | 78 |
| total_timesteps | 26500 |
| train/ | |
| entropy_loss | -40.7 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 17299 |
| policy_loss | 3.16e+08 |
| std | 0.938 |
| value_loss | 7.8e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 336 |
| iterations | 5400 |
| time_elapsed | 80 |
| total_timesteps | 27000 |
| train/ | |
| entropy_loss | -40.6 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 17399 |
| policy_loss | 3.95e+08 |
| std | 0.938 |
| value_loss | 1.14e+14 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 336 |
| iterations | 5500 |
| time_elapsed | 81 |
| total_timesteps | 27500 |
| train/ | |
| entropy_loss | -40.6 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 17499 |
| policy_loss | 6.04e+08 |
| std | 0.937 |
| value_loss | 2.22e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4591802.064526513
Sharpe: 1.0188228298492967
=================================
-------------------------------------
| time/ | |
| fps | 336 |
| iterations | 5600 |
| time_elapsed | 83 |
| total_timesteps | 28000 |
| train/ | |
| entropy_loss | -40.6 |
| explained_variance | -2.38e-07 |
| learning_rate | 0.0002 |
| n_updates | 17599 |
| policy_loss | 1.73e+08 |
| std | 0.937 |
| value_loss | 2.22e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 336 |
| iterations | 5700 |
| time_elapsed | 84 |
| total_timesteps | 28500 |
| train/ | |
| entropy_loss | -40.6 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 17699 |
| policy_loss | 2.13e+08 |
| std | 0.937 |
| value_loss | 3.68e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 336 |
| iterations | 5800 |
| time_elapsed | 86 |
| total_timesteps | 29000 |
| train/ | |
| entropy_loss | -40.6 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 17799 |
| policy_loss | 3.15e+08 |
| std | 0.937 |
| value_loss | 7.34e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 336 |
| iterations | 5900 |
| time_elapsed | 87 |
| total_timesteps | 29500 |
| train/ | |
| entropy_loss | -40.6 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 17899 |
| policy_loss | 3.56e+08 |
| std | 0.936 |
| value_loss | 1.01e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 336 |
| iterations | 6000 |
| time_elapsed | 89 |
| total_timesteps | 30000 |
| train/ | |
| entropy_loss | -40.5 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 17999 |
| policy_loss | 5.88e+08 |
| std | 0.935 |
| value_loss | 2.08e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4389104.288134387
Sharpe: 0.9933788463870157
=================================
------------------------------------
| time/ | |
| fps | 335 |
| iterations | 6100 |
| time_elapsed | 90 |
| total_timesteps | 30500 |
| train/ | |
| entropy_loss | -40.5 |
| explained_variance | 1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 18099 |
| policy_loss | 1.72e+08 |
| std | 0.935 |
| value_loss | 2.2e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 336 |
| iterations | 6200 |
| time_elapsed | 92 |
| total_timesteps | 31000 |
| train/ | |
| entropy_loss | -40.5 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 18199 |
| policy_loss | 2.32e+08 |
| std | 0.934 |
| value_loss | 3.84e+13 |
-------------------------------------
-------------------------------------
| time/ | |
| fps | 336 |
| iterations | 6300 |
| time_elapsed | 93 |
| total_timesteps | 31500 |
| train/ | |
| entropy_loss | -40.5 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 18299 |
| policy_loss | 3.14e+08 |
| std | 0.935 |
| value_loss | 7.79e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 336 |
| iterations | 6400 |
| time_elapsed | 95 |
| total_timesteps | 32000 |
| train/ | |
| entropy_loss | -40.5 |
| explained_variance | 1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 18399 |
| policy_loss | 3.81e+08 |
| std | 0.934 |
| value_loss | 9.57e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 336 |
| iterations | 6500 |
| time_elapsed | 96 |
| total_timesteps | 32500 |
| train/ | |
| entropy_loss | -40.5 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 18499 |
| policy_loss | 5.48e+08 |
| std | 0.933 |
| value_loss | 2.3e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4580263.352082179
Sharpe: 1.0226861102653615
=================================
------------------------------------
| time/ | |
| fps | 335 |
| iterations | 6600 |
| time_elapsed | 98 |
| total_timesteps | 33000 |
| train/ | |
| entropy_loss | -40.5 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 18599 |
| policy_loss | 1.57e+08 |
| std | 0.933 |
| value_loss | 1.92e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 335 |
| iterations | 6700 |
| time_elapsed | 99 |
| total_timesteps | 33500 |
| train/ | |
| entropy_loss | -40.5 |
| explained_variance | 5.96e-08 |
| learning_rate | 0.0002 |
| n_updates | 18699 |
| policy_loss | 2.32e+08 |
| std | 0.933 |
| value_loss | 3.7e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 335 |
| iterations | 6800 |
| time_elapsed | 101 |
| total_timesteps | 34000 |
| train/ | |
| entropy_loss | -40.5 |
| explained_variance | 5.96e-08 |
| learning_rate | 0.0002 |
| n_updates | 18799 |
| policy_loss | 3.06e+08 |
| std | 0.933 |
| value_loss | 7.37e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 334 |
| iterations | 6900 |
| time_elapsed | 103 |
| total_timesteps | 34500 |
| train/ | |
| entropy_loss | -40.4 |
| explained_variance | 5.96e-08 |
| learning_rate | 0.0002 |
| n_updates | 18899 |
| policy_loss | 3.57e+08 |
| std | 0.932 |
| value_loss | 9.15e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 334 |
| iterations | 7000 |
| time_elapsed | 104 |
| total_timesteps | 35000 |
| train/ | |
| entropy_loss | -40.4 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 18999 |
| policy_loss | 5.49e+08 |
| std | 0.931 |
| value_loss | 2.57e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4583070.081894048
Sharpe: 1.0296700608185065
=================================
------------------------------------
| time/ | |
| fps | 333 |
| iterations | 7100 |
| time_elapsed | 106 |
| total_timesteps | 35500 |
| train/ | |
| entropy_loss | -40.4 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 19099 |
| policy_loss | 1.68e+08 |
| std | 0.931 |
| value_loss | 1.88e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 333 |
| iterations | 7200 |
| time_elapsed | 107 |
| total_timesteps | 36000 |
| train/ | |
| entropy_loss | -40.4 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 19199 |
| policy_loss | 2.09e+08 |
| std | 0.931 |
| value_loss | 3.39e+13 |
-------------------------------------
-------------------------------------
| time/ | |
| fps | 333 |
| iterations | 7300 |
| time_elapsed | 109 |
| total_timesteps | 36500 |
| train/ | |
| entropy_loss | -40.4 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 19299 |
| policy_loss | 3.17e+08 |
| std | 0.931 |
| value_loss | 7.95e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 333 |
| iterations | 7400 |
| time_elapsed | 111 |
| total_timesteps | 37000 |
| train/ | |
| entropy_loss | -40.4 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 19399 |
| policy_loss | 3.68e+08 |
| std | 0.931 |
| value_loss | 9.27e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 332 |
| iterations | 7500 |
| time_elapsed | 112 |
| total_timesteps | 37500 |
| train/ | |
| entropy_loss | -40.4 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 19499 |
| policy_loss | 6.09e+08 |
| std | 0.931 |
| value_loss | 2.31e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4576426.405502999
Sharpe: 1.0235768164756291
=================================
------------------------------------
| time/ | |
| fps | 332 |
| iterations | 7600 |
| time_elapsed | 114 |
| total_timesteps | 38000 |
| train/ | |
| entropy_loss | -40.4 |
| explained_variance | 1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 19599 |
| policy_loss | 1.59e+08 |
| std | 0.931 |
| value_loss | 2.02e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 331 |
| iterations | 7700 |
| time_elapsed | 115 |
| total_timesteps | 38500 |
| train/ | |
| entropy_loss | -40.4 |
| explained_variance | 5.96e-08 |
| learning_rate | 0.0002 |
| n_updates | 19699 |
| policy_loss | 2.21e+08 |
| std | 0.93 |
| value_loss | 3.36e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 331 |
| iterations | 7800 |
| time_elapsed | 117 |
| total_timesteps | 39000 |
| train/ | |
| entropy_loss | -40.4 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 19799 |
| policy_loss | 3.26e+08 |
| std | 0.93 |
| value_loss | 8.54e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 331 |
| iterations | 7900 |
| time_elapsed | 119 |
| total_timesteps | 39500 |
| train/ | |
| entropy_loss | -40.4 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 19899 |
| policy_loss | 3.73e+08 |
| std | 0.93 |
| value_loss | 1.15e+14 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 331 |
| iterations | 8000 |
| time_elapsed | 120 |
| total_timesteps | 40000 |
| train/ | |
| entropy_loss | -40.3 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 19999 |
| policy_loss | 5.89e+08 |
| std | 0.929 |
| value_loss | 2.49e+14 |
-------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4940621.780834714
Sharpe: 1.0767272532158483
=================================
------------------------------------
| time/ | |
| fps | 330 |
| iterations | 8100 |
| time_elapsed | 122 |
| total_timesteps | 40500 |
| train/ | |
| entropy_loss | -40.3 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 20099 |
| policy_loss | 1.5e+08 |
| std | 0.928 |
| value_loss | 1.82e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 330 |
| iterations | 8200 |
| time_elapsed | 123 |
| total_timesteps | 41000 |
| train/ | |
| entropy_loss | -40.3 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 20199 |
| policy_loss | 1.78e+08 |
| std | 0.928 |
| value_loss | 2.61e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 330 |
| iterations | 8300 |
| time_elapsed | 125 |
| total_timesteps | 41500 |
| train/ | |
| entropy_loss | -40.3 |
| explained_variance | 1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 20299 |
| policy_loss | 3.09e+08 |
| std | 0.927 |
| value_loss | 6.16e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 330 |
| iterations | 8400 |
| time_elapsed | 127 |
| total_timesteps | 42000 |
| train/ | |
| entropy_loss | -40.3 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 20399 |
| policy_loss | 3.35e+08 |
| std | 0.927 |
| value_loss | 9.63e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 330 |
| iterations | 8500 |
| time_elapsed | 128 |
| total_timesteps | 42500 |
| train/ | |
| entropy_loss | -40.3 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 20499 |
| policy_loss | 5.1e+08 |
| std | 0.927 |
| value_loss | 1.7e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4118580.1744499537
Sharpe: 0.9620511561976229
=================================
-------------------------------------
| time/ | |
| fps | 329 |
| iterations | 8600 |
| time_elapsed | 130 |
| total_timesteps | 43000 |
| train/ | |
| entropy_loss | -40.3 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 20599 |
| policy_loss | 1.52e+08 |
| std | 0.927 |
| value_loss | 1.83e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 329 |
| iterations | 8700 |
| time_elapsed | 131 |
| total_timesteps | 43500 |
| train/ | |
| entropy_loss | -40.3 |
| explained_variance | 2.38e-07 |
| learning_rate | 0.0002 |
| n_updates | 20699 |
| policy_loss | 2.01e+08 |
| std | 0.927 |
| value_loss | 2.66e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 329 |
| iterations | 8800 |
| time_elapsed | 133 |
| total_timesteps | 44000 |
| train/ | |
| entropy_loss | -40.3 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 20799 |
| policy_loss | 2.8e+08 |
| std | 0.927 |
| value_loss | 6.24e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 329 |
| iterations | 8900 |
| time_elapsed | 135 |
| total_timesteps | 44500 |
| train/ | |
| entropy_loss | -40.3 |
| explained_variance | -2.38e-07 |
| learning_rate | 0.0002 |
| n_updates | 20899 |
| policy_loss | 3.31e+08 |
| std | 0.926 |
| value_loss | 9.61e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 329 |
| iterations | 9000 |
| time_elapsed | 136 |
| total_timesteps | 45000 |
| train/ | |
| entropy_loss | -40.2 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 20999 |
| policy_loss | 4.87e+08 |
| std | 0.926 |
| value_loss | 1.68e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4246562.525827188
Sharpe: 0.9779057432228896
=================================
------------------------------------
| time/ | |
| fps | 329 |
| iterations | 9100 |
| time_elapsed | 138 |
| total_timesteps | 45500 |
| train/ | |
| entropy_loss | -40.2 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 21099 |
| policy_loss | 1.54e+08 |
| std | 0.926 |
| value_loss | 1.7e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 329 |
| iterations | 9200 |
| time_elapsed | 139 |
| total_timesteps | 46000 |
| train/ | |
| entropy_loss | -40.2 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 21199 |
| policy_loss | 1.94e+08 |
| std | 0.925 |
| value_loss | 2.63e+13 |
-------------------------------------
-------------------------------------
| time/ | |
| fps | 329 |
| iterations | 9300 |
| time_elapsed | 141 |
| total_timesteps | 46500 |
| train/ | |
| entropy_loss | -40.2 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 21299 |
| policy_loss | 2.97e+08 |
| std | 0.925 |
| value_loss | 6.54e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 329 |
| iterations | 9400 |
| time_elapsed | 142 |
| total_timesteps | 47000 |
| train/ | |
| entropy_loss | -40.2 |
| explained_variance | 1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 21399 |
| policy_loss | 3.59e+08 |
| std | 0.925 |
| value_loss | 9.92e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 329 |
| iterations | 9500 |
| time_elapsed | 144 |
| total_timesteps | 47500 |
| train/ | |
| entropy_loss | -40.2 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 21499 |
| policy_loss | 4.6e+08 |
| std | 0.924 |
| value_loss | 1.94e+14 |
-------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4580219.125422531
Sharpe: 1.0290957953577193
=================================
------------------------------------
| time/ | |
| fps | 328 |
| iterations | 9600 |
| time_elapsed | 146 |
| total_timesteps | 48000 |
| train/ | |
| entropy_loss | -40.2 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 21599 |
| policy_loss | 1.49e+08 |
| std | 0.923 |
| value_loss | 1.61e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 328 |
| iterations | 9700 |
| time_elapsed | 147 |
| total_timesteps | 48500 |
| train/ | |
| entropy_loss | -40.1 |
| explained_variance | 2.38e-07 |
| learning_rate | 0.0002 |
| n_updates | 21699 |
| policy_loss | 1.83e+08 |
| std | 0.923 |
| value_loss | 2.48e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 328 |
| iterations | 9800 |
| time_elapsed | 149 |
| total_timesteps | 49000 |
| train/ | |
| entropy_loss | -40.1 |
| explained_variance | 1.79e-07 |
| learning_rate | 0.0002 |
| n_updates | 21799 |
| policy_loss | 3.18e+08 |
| std | 0.922 |
| value_loss | 6.2e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 328 |
| iterations | 9900 |
| time_elapsed | 150 |
| total_timesteps | 49500 |
| train/ | |
| entropy_loss | -40.1 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 21899 |
| policy_loss | 3.49e+08 |
| std | 0.922 |
| value_loss | 8.39e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 328 |
| iterations | 10000 |
| time_elapsed | 152 |
| total_timesteps | 50000 |
| train/ | |
| entropy_loss | -40.1 |
| explained_variance | 1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 21999 |
| policy_loss | 4.71e+08 |
| std | 0.921 |
| value_loss | 1.69e+14 |
------------------------------------
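###Markdown
Before moving on, the trained agent can be persisted. In this FinRL version `agent.train_model` returns the underlying Stable-Baselines3 model, which supports `save`/`load`; the file name below is illustrative:
###Code
# save the trained A2C weights alongside the other artifacts
trained_a2c.save("./" + config.TRAINED_MODEL_DIR + "/a2c_portfolio")
# to reload later:
# from stable_baselines3 import A2C
# model = A2C.load("./" + config.TRAINED_MODEL_DIR + "/a2c_portfolio")
###Output
_____no_output_____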
###Markdown
Model 2: **PPO**
###Code
agent = DRLAgent(env = env_train)
PPO_PARAMS = {
"n_steps": 2048,
"ent_coef": 0.005,
"learning_rate": 0.0001,
"batch_size": 128,
}
model_ppo = agent.get_model("ppo",model_kwargs = PPO_PARAMS)
trained_ppo = agent.train_model(model=model_ppo,
tb_log_name='ppo',
total_timesteps=80000)
###Output
Logging to tensorboard_log/ppo/ppo_3
-----------------------------
| time/ | |
| fps | 458 |
| iterations | 1 |
| time_elapsed | 4 |
| total_timesteps | 2048 |
-----------------------------
=================================
begin_total_asset:1000000
end_total_asset:4917364.6278486075
Sharpe: 1.074414829116363
=================================
--------------------------------------------
| time/ | |
| fps | 391 |
| iterations | 2 |
| time_elapsed | 10 |
| total_timesteps | 4096 |
| train/ | |
| approx_kl | -7.8231096e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -3.71e+14 |
| learning_rate | 0.0001 |
| loss | 7.78e+14 |
| n_updates | 10 |
| policy_gradient_loss | -6.16e-07 |
| std | 1 |
| value_loss | 1.57e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4996331.100586685
Sharpe: 1.0890927964884638
=================================
--------------------------------------------
| time/ | |
| fps | 373 |
| iterations | 3 |
| time_elapsed | 16 |
| total_timesteps | 6144 |
| train/ | |
| approx_kl | -3.5390258e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -8.76e+14 |
| learning_rate | 0.0001 |
| loss | 1.1e+15 |
| n_updates | 20 |
| policy_gradient_loss | -4.29e-07 |
| std | 1 |
| value_loss | 2.33e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4751039.2878817525
Sharpe: 1.0560179406423764
=================================
--------------------------------------------
| time/ | |
| fps | 365 |
| iterations | 4 |
| time_elapsed | 22 |
| total_timesteps | 8192 |
| train/ | |
| approx_kl | -1.6763806e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -8.01e+15 |
| learning_rate | 0.0001 |
| loss | 1.25e+15 |
| n_updates | 30 |
| policy_gradient_loss | -5.58e-07 |
| std | 1 |
| value_loss | 2.59e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4769059.347696523
Sharpe: 1.056814654380227
=================================
--------------------------------------------
| time/ | |
| fps | 360 |
| iterations | 5 |
| time_elapsed | 28 |
| total_timesteps | 10240 |
| train/ | |
| approx_kl | -5.5879354e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -2.55e+16 |
| learning_rate | 0.0001 |
| loss | 1.24e+15 |
| n_updates | 40 |
| policy_gradient_loss | -4.9e-07 |
| std | 1 |
| value_loss | 2.7e+15 |
--------------------------------------------
--------------------------------------------
| time/ | |
| fps | 358 |
| iterations | 6 |
| time_elapsed | 34 |
| total_timesteps | 12288 |
| train/ | |
| approx_kl | 1.13621354e-07 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -9.17e+16 |
| learning_rate | 0.0001 |
| loss | 1.35e+15 |
| n_updates | 50 |
| policy_gradient_loss | -4.28e-07 |
| std | 1 |
| value_loss | 2.77e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4816491.86007194
Sharpe: 1.0636199939613733
=================================
-------------------------------------------
| time/ | |
| fps | 356 |
| iterations | 7 |
| time_elapsed | 40 |
| total_timesteps | 14336 |
| train/ | |
| approx_kl | 3.5390258e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.42e+17 |
| learning_rate | 0.0001 |
| loss | 1.03e+15 |
| n_updates | 60 |
| policy_gradient_loss | -6.52e-07 |
| std | 1 |
| value_loss | 1.94e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4631919.83090099
Sharpe: 1.0396504731290799
=================================
-------------------------------------------
| time/ | |
| fps | 354 |
| iterations | 8 |
| time_elapsed | 46 |
| total_timesteps | 16384 |
| train/ | |
| approx_kl | 1.7508864e-07 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -6.93e+17 |
| learning_rate | 0.0001 |
| loss | 9.83e+14 |
| n_updates | 70 |
| policy_gradient_loss | -5.78e-07 |
| std | 1 |
| value_loss | 2.06e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4728763.286321457
Sharpe: 1.052390302374202
=================================
-------------------------------------------
| time/ | |
| fps | 353 |
| iterations | 9 |
| time_elapsed | 52 |
| total_timesteps | 18432 |
| train/ | |
| approx_kl | 4.4703484e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.72e+18 |
| learning_rate | 0.0001 |
| loss | 1.25e+15 |
| n_updates | 80 |
| policy_gradient_loss | -4.84e-07 |
| std | 1 |
| value_loss | 2.33e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4439983.024798136
Sharpe: 1.013829383303325
=================================
--------------------------------------------
| time/ | |
| fps | 352 |
| iterations | 10 |
| time_elapsed | 58 |
| total_timesteps | 20480 |
| train/ | |
| approx_kl | -1.3038516e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.7e+18 |
| learning_rate | 0.0001 |
| loss | 1.17e+15 |
| n_updates | 90 |
| policy_gradient_loss | -4.82e-07 |
| std | 1 |
| value_loss | 2.58e+15 |
--------------------------------------------
-------------------------------------------
| time/ | |
| fps | 352 |
| iterations | 11 |
| time_elapsed | 63 |
| total_timesteps | 22528 |
| train/ | |
| approx_kl | -9.313226e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.85e+18 |
| learning_rate | 0.0001 |
| loss | 1.2e+15 |
| n_updates | 100 |
| policy_gradient_loss | -5.2e-07 |
| std | 1 |
| value_loss | 2.51e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:5048884.524536961
Sharpe: 1.0963911876706685
=================================
-------------------------------------------
| time/ | |
| fps | 351 |
| iterations | 12 |
| time_elapsed | 69 |
| total_timesteps | 24576 |
| train/ | |
| approx_kl | 3.7252903e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -2.67e+18 |
| learning_rate | 0.0001 |
| loss | 1.44e+15 |
| n_updates | 110 |
| policy_gradient_loss | -4.53e-07 |
| std | 1 |
| value_loss | 2.8e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4824229.456193555
Sharpe: 1.0648549464252506
=================================
-------------------------------------------
| time/ | |
| fps | 351 |
| iterations | 13 |
| time_elapsed | 75 |
| total_timesteps | 26624 |
| train/ | |
| approx_kl | 3.3527613e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -3.38e+18 |
| learning_rate | 0.0001 |
| loss | 7.89e+14 |
| n_updates | 120 |
| policy_gradient_loss | -6.06e-07 |
| std | 1 |
| value_loss | 1.76e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4602974.615591427
Sharpe: 1.034753433280377
=================================
-------------------------------------------
| time/ | |
| fps | 350 |
| iterations | 14 |
| time_elapsed | 81 |
| total_timesteps | 28672 |
| train/ | |
| approx_kl | 8.8475645e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.75e+19 |
| learning_rate | 0.0001 |
| loss | 1.23e+15 |
| n_updates | 130 |
| policy_gradient_loss | -5.8e-07 |
| std | 1 |
| value_loss | 2.27e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4608422.583401322
Sharpe: 1.035300880612428
=================================
-------------------------------------------
| time/ | |
| fps | 349 |
| iterations | 15 |
| time_elapsed | 87 |
| total_timesteps | 30720 |
| train/ | |
| approx_kl | 1.3038516e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -7.71e+18 |
| learning_rate | 0.0001 |
| loss | 1.22e+15 |
| n_updates | 140 |
| policy_gradient_loss | -5.63e-07 |
| std | 1 |
| value_loss | 2.39e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4826869.636472441
Sharpe: 1.0676330284861433
=================================
--------------------------------------------
| time/ | |
| fps | 348 |
| iterations | 16 |
| time_elapsed | 94 |
| total_timesteps | 32768 |
| train/ | |
| approx_kl | -1.4901161e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.51e+19 |
| learning_rate | 0.0001 |
| loss | 1.22e+15 |
| n_updates | 150 |
| policy_gradient_loss | -5.78e-07 |
| std | 1 |
| value_loss | 2.7e+15 |
--------------------------------------------
-------------------------------------------
| time/ | |
| fps | 346 |
| iterations | 17 |
| time_elapsed | 100 |
| total_timesteps | 34816 |
| train/ | |
| approx_kl | -5.401671e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.48e+19 |
| learning_rate | 0.0001 |
| loss | 1.48e+15 |
| n_updates | 160 |
| policy_gradient_loss | -3.96e-07 |
| std | 1 |
| value_loss | 2.81e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4364006.929301854
Sharpe: 1.002176631256902
=================================
--------------------------------------------
| time/ | |
| fps | 345 |
| iterations | 18 |
| time_elapsed | 106 |
| total_timesteps | 36864 |
| train/ | |
| approx_kl | -1.0803342e-07 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.15e+19 |
| learning_rate | 0.0001 |
| loss | 8.41e+14 |
| n_updates | 170 |
| policy_gradient_loss | -4.91e-07 |
| std | 1 |
| value_loss | 1.58e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4796634.5596691
Sharpe: 1.0678319491053092
=================================
--------------------------------------------
| time/ | |
| fps | 344 |
| iterations | 19 |
| time_elapsed | 112 |
| total_timesteps | 38912 |
| train/ | |
| approx_kl | -1.3038516e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -4.21e+19 |
| learning_rate | 0.0001 |
| loss | 1.03e+15 |
| n_updates | 180 |
| policy_gradient_loss | -5.6e-07 |
| std | 1 |
| value_loss | 2.02e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4969786.413399254
Sharpe: 1.0823021486710163
=================================
--------------------------------------------
| time/ | |
| fps | 344 |
| iterations | 20 |
| time_elapsed | 118 |
| total_timesteps | 40960 |
| train/ | |
| approx_kl | -6.7055225e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -6.41e+19 |
| learning_rate | 0.0001 |
| loss | 1.22e+15 |
| n_updates | 190 |
| policy_gradient_loss | -2.87e-07 |
| std | 1 |
| value_loss | 2.4e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4885480.801922398
Sharpe: 1.0729451877791811
=================================
--------------------------------------------
| time/ | |
| fps | 343 |
| iterations | 21 |
| time_elapsed | 125 |
| total_timesteps | 43008 |
| train/ | |
| approx_kl | -5.5879354e-09 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -6.85e+19 |
| learning_rate | 0.0001 |
| loss | 1.62e+15 |
| n_updates | 200 |
| policy_gradient_loss | -5.24e-07 |
| std | 1 |
| value_loss | 2.95e+15 |
--------------------------------------------
-------------------------------------------
| time/ | |
| fps | 343 |
| iterations | 22 |
| time_elapsed | 131 |
| total_timesteps | 45056 |
| train/ | |
| approx_kl | 1.8067658e-07 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -7.01e+19 |
| learning_rate | 0.0001 |
| loss | 1.34e+15 |
| n_updates | 210 |
| policy_gradient_loss | -4.62e-07 |
| std | 1 |
| value_loss | 2.93e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:5613709.009268909
Sharpe: 1.1673870008513114
=================================
--------------------------------------------
| time/ | |
| fps | 342 |
| iterations | 23 |
| time_elapsed | 137 |
| total_timesteps | 47104 |
| train/ | |
| approx_kl | -2.0489097e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -6.72e+19 |
| learning_rate | 0.0001 |
| loss | 1.41e+15 |
| n_updates | 220 |
| policy_gradient_loss | -4.78e-07 |
| std | 1 |
| value_loss | 2.71e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:5043800.590470289
Sharpe: 1.0953673306850924
=================================
-------------------------------------------
| time/ | |
| fps | 342 |
| iterations | 24 |
| time_elapsed | 143 |
| total_timesteps | 49152 |
| train/ | |
| approx_kl | 2.4214387e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.37e+20 |
| learning_rate | 0.0001 |
| loss | 1.01e+15 |
| n_updates | 230 |
| policy_gradient_loss | -5.28e-07 |
| std | 1 |
| value_loss | 2.26e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4776576.852863929
Sharpe: 1.0593811754233755
=================================
-------------------------------------------
| time/ | |
| fps | 342 |
| iterations | 25 |
| time_elapsed | 149 |
| total_timesteps | 51200 |
| train/ | |
| approx_kl | 4.4703484e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -3.27e+20 |
| learning_rate | 0.0001 |
| loss | 1.21e+15 |
| n_updates | 240 |
| policy_gradient_loss | -4.82e-07 |
| std | 1 |
| value_loss | 2.46e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4468393.200157898
Sharpe: 1.0192746589767419
=================================
-------------------------------------------
| time/ | |
| fps | 341 |
| iterations | 26 |
| time_elapsed | 156 |
| total_timesteps | 53248 |
| train/ | |
| approx_kl | 2.6077032e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.96e+20 |
| learning_rate | 0.0001 |
| loss | 1.31e+15 |
| n_updates | 250 |
| policy_gradient_loss | -5.36e-07 |
| std | 1 |
| value_loss | 2.59e+15 |
-------------------------------------------
--------------------------------------------
| time/ | |
| fps | 341 |
| iterations | 27 |
| time_elapsed | 162 |
| total_timesteps | 55296 |
| train/ | |
| approx_kl | -1.3038516e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.68e+20 |
| learning_rate | 0.0001 |
| loss | 1.33e+15 |
| n_updates | 260 |
| policy_gradient_loss | -3.77e-07 |
| std | 1 |
| value_loss | 2.51e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4875234.39450474
Sharpe: 1.0721137742534572
=================================
--------------------------------------------
| time/ | |
| fps | 340 |
| iterations | 28 |
| time_elapsed | 168 |
| total_timesteps | 57344 |
| train/ | |
| approx_kl | -1.2479722e-07 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.66e+20 |
| learning_rate | 0.0001 |
| loss | 1.59e+15 |
| n_updates | 270 |
| policy_gradient_loss | -4.61e-07 |
| std | 1 |
| value_loss | 2.8e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4600459.210918712
Sharpe: 1.034756153745345
=================================
-------------------------------------------
| time/ | |
| fps | 340 |
| iterations | 29 |
| time_elapsed | 174 |
| total_timesteps | 59392 |
| train/ | |
| approx_kl | -4.284084e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.26e+20 |
| learning_rate | 0.0001 |
| loss | 8.07e+14 |
| n_updates | 280 |
| policy_gradient_loss | -5.44e-07 |
| std | 1 |
| value_loss | 1.62e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4526188.381438201
Sharpe: 1.0293846869900876
=================================
--------------------------------------------
| time/ | |
| fps | 339 |
| iterations | 30 |
| time_elapsed | 180 |
| total_timesteps | 61440 |
| train/ | |
| approx_kl | -2.4214387e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -6.44e+20 |
| learning_rate | 0.0001 |
| loss | 1.12e+15 |
| n_updates | 290 |
| policy_gradient_loss | -5.65e-07 |
| std | 1 |
| value_loss | 2.1e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4487836.803716703
Sharpe: 1.010974660894394
=================================
--------------------------------------------
| time/ | |
| fps | 339 |
| iterations | 31 |
| time_elapsed | 187 |
| total_timesteps | 63488 |
| train/ | |
| approx_kl | -2.6077032e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -4.47e+20 |
| learning_rate | 0.0001 |
| loss | 1.14e+15 |
| n_updates | 300 |
| policy_gradient_loss | -4.8e-07 |
| std | 1 |
| value_loss | 2.25e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4480729.650671386
Sharpe: 1.0219085518652522
=================================
--------------------------------------------
| time/ | |
| fps | 339 |
| iterations | 32 |
| time_elapsed | 193 |
| total_timesteps | 65536 |
| train/ | |
| approx_kl | -2.0302832e-07 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -3.87e+20 |
| learning_rate | 0.0001 |
| loss | 1.28e+15 |
| n_updates | 310 |
| policy_gradient_loss | -4.4e-07 |
| std | 1 |
| value_loss | 2.51e+15 |
--------------------------------------------
------------------------------------------
| time/ | |
| fps | 339 |
| iterations | 33 |
| time_elapsed | 199 |
| total_timesteps | 67584 |
| train/ | |
| approx_kl | 1.359731e-07 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -3.68e+20 |
| learning_rate | 0.0001 |
| loss | 1.24e+15 |
| n_updates | 320 |
| policy_gradient_loss | -4.51e-07 |
| std | 1 |
| value_loss | 2.66e+15 |
------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4399373.734699048
Sharpe: 1.005407087483561
=================================
-------------------------------------------
| time/ | |
| fps | 338 |
| iterations | 34 |
| time_elapsed | 205 |
| total_timesteps | 69632 |
| train/ | |
| approx_kl | 2.2351742e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -2.29e+20 |
| learning_rate | 0.0001 |
| loss | 8.5e+14 |
| n_updates | 330 |
| policy_gradient_loss | -5.56e-07 |
| std | 1 |
| value_loss | 1.64e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4305742.921261859
Sharpe: 0.9945061913961891
=================================
-------------------------------------------
| time/ | |
| fps | 338 |
| iterations | 35 |
| time_elapsed | 211 |
| total_timesteps | 71680 |
| train/ | |
| approx_kl | 1.3411045e-07 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -7.11e+20 |
| learning_rate | 0.0001 |
| loss | 7.97e+14 |
| n_updates | 340 |
| policy_gradient_loss | -6.48e-07 |
| std | 1 |
| value_loss | 1.8e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4794175.629957249
Sharpe: 1.0611635246548963
=================================
--------------------------------------------
| time/ | |
| fps | 338 |
| iterations | 36 |
| time_elapsed | 217 |
| total_timesteps | 73728 |
| train/ | |
| approx_kl | -3.3527613e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.16e+21 |
| learning_rate | 0.0001 |
| loss | 1.07e+15 |
| n_updates | 350 |
| policy_gradient_loss | -4.82e-07 |
| std | 1 |
| value_loss | 2.06e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4467487.416264421
Sharpe: 1.021012208464475
=================================
------------------------------------------
| time/ | |
| fps | 338 |
| iterations | 37 |
| time_elapsed | 224 |
| total_timesteps | 75776 |
| train/ | |
| approx_kl | 5.401671e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -9.89e+20 |
| learning_rate | 0.0001 |
| loss | 1.46e+15 |
| n_updates | 360 |
| policy_gradient_loss | -4.78e-07 |
| std | 1 |
| value_loss | 2.75e+15 |
------------------------------------------
-------------------------------------------
| time/ | |
| fps | 338 |
| iterations | 38 |
| time_elapsed | 229 |
| total_timesteps | 77824 |
| train/ | |
| approx_kl | 1.6763806e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -7.64e+20 |
| learning_rate | 0.0001 |
| loss | 1.25e+15 |
| n_updates | 370 |
| policy_gradient_loss | -4.54e-07 |
| std | 1 |
| value_loss | 2.57e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4806649.219027834
Sharpe: 1.0604486398186765
=================================
------------------------------------------
| time/ | |
| fps | 338 |
| iterations | 39 |
| time_elapsed | 236 |
| total_timesteps | 79872 |
| train/ | |
| approx_kl | 4.284084e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -6.96e+20 |
| learning_rate | 0.0001 |
| loss | 1.28e+15 |
| n_updates | 380 |
| policy_gradient_loss | -5.9e-07 |
| std | 1 |
| value_loss | 2.44e+15 |
------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4653147.508966551
Sharpe: 1.043189911078732
=================================
-------------------------------------------
| time/ | |
| fps | 338 |
| iterations | 40 |
| time_elapsed | 242 |
| total_timesteps | 81920 |
| train/ | |
| approx_kl | 6.3329935e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.04e+21 |
| learning_rate | 0.0001 |
| loss | 1.01e+15 |
| n_updates | 390 |
| policy_gradient_loss | -5.33e-07 |
| std | 1 |
| value_loss | 1.82e+15 |
-------------------------------------------
###Markdown
Model 3: **DDPG**
###Code
agent = DRLAgent(env = env_train)
DDPG_PARAMS = {"batch_size": 128, "buffer_size": 50000, "learning_rate": 0.001}
model_ddpg = agent.get_model("ddpg",model_kwargs = DDPG_PARAMS)
trained_ddpg = agent.train_model(model=model_ddpg,
tb_log_name='ddpg',
total_timesteps=50000)
###Output
Logging to tensorboard_log/ddpg/ddpg_2
=================================
begin_total_asset:1000000
end_total_asset:4625995.900359718
Sharpe: 1.040202670783119
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
----------------------------------
| time/ | |
| episodes | 4 |
| fps | 22 |
| time_elapsed | 439 |
| total timesteps | 10064 |
| train/ | |
| actor_loss | -6.99e+07 |
| critic_loss | 7.27e+12 |
| learning_rate | 0.001 |
| n_updates | 7548 |
----------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
----------------------------------
| time/ | |
| episodes | 8 |
| fps | 20 |
| time_elapsed | 980 |
| total timesteps | 20128 |
| train/ | |
| actor_loss | -1.44e+08 |
| critic_loss | 1.81e+13 |
| learning_rate | 0.001 |
| n_updates | 17612 |
----------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
----------------------------------
| time/ | |
| episodes | 12 |
| fps | 19 |
| time_elapsed | 1542 |
| total timesteps | 30192 |
| train/ | |
| actor_loss | -1.88e+08 |
| critic_loss | 2.72e+13 |
| learning_rate | 0.001 |
| n_updates | 27676 |
----------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
----------------------------------
| time/ | |
| episodes | 16 |
| fps | 18 |
| time_elapsed | 2133 |
| total timesteps | 40256 |
| train/ | |
| actor_loss | -2.15e+08 |
| critic_loss | 3.45e+13 |
| learning_rate | 0.001 |
| n_updates | 37740 |
----------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
---------------------------------
| time/ | |
| episodes | 20 |
| fps | 17 |
| time_elapsed | 2874 |
| total timesteps | 50320 |
| train/ | |
| actor_loss | -2.3e+08 |
| critic_loss | 4.05e+13 |
| learning_rate | 0.001 |
| n_updates | 47804 |
---------------------------------
###Markdown
Model 4: **SAC**
###Code
agent = DRLAgent(env = env_train)
SAC_PARAMS = {
"batch_size": 128,
"buffer_size": 100000,
"learning_rate": 0.0003,
"learning_starts": 100,
"ent_coef": "auto_0.1",
}
model_sac = agent.get_model("sac",model_kwargs = SAC_PARAMS)
trained_sac = agent.train_model(model=model_sac,
tb_log_name='sac',
total_timesteps=50000)
###Output
Logging to tensorboard_log/sac/sac_1
=================================
begin_total_asset:1000000
end_total_asset:4449463.498168942
Sharpe: 1.01245667390232
=================================
=================================
begin_total_asset:1000000
end_total_asset:4418643.239765096
Sharpe: 1.0135796594260282
=================================
=================================
begin_total_asset:1000000
end_total_asset:4418644.1960784905
Sharpe: 1.0135797537524718
=================================
=================================
begin_total_asset:1000000
end_total_asset:4418659.429680678
Sharpe: 1.013581852537709
=================================
----------------------------------
| time/ | |
| episodes | 4 |
| fps | 12 |
| time_elapsed | 783 |
| total timesteps | 10064 |
| train/ | |
| actor_loss | -8.83e+07 |
| critic_loss | 6.57e+12 |
| ent_coef | 2.24 |
| ent_coef_loss | -205 |
| learning_rate | 0.0003 |
| n_updates | 9963 |
----------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4418651.576406099
Sharpe: 1.013581224026754
=================================
=================================
begin_total_asset:1000000
end_total_asset:4418670.948269031
Sharpe: 1.0135838030234754
=================================
=================================
begin_total_asset:1000000
end_total_asset:4418682.278829884
Sharpe: 1.013585596968056
=================================
=================================
begin_total_asset:1000000
end_total_asset:4418791.911955293
Sharpe: 1.0136007328171013
=================================
----------------------------------
| time/ | |
| episodes | 8 |
| fps | 12 |
| time_elapsed | 1585 |
| total timesteps | 20128 |
| train/ | |
| actor_loss | -1.51e+08 |
| critic_loss | 1.12e+13 |
| ent_coef | 41.7 |
| ent_coef_loss | -670 |
| learning_rate | 0.0003 |
| n_updates | 20027 |
----------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4418737.365107464
Sharpe: 1.0135970410224868
=================================
=================================
begin_total_asset:1000000
end_total_asset:4418754.895735274
Sharpe: 1.0135965589029627
=================================
=================================
begin_total_asset:1000000
end_total_asset:4419325.814567342
Sharpe: 1.0136807224228588
=================================
=================================
begin_total_asset:1000000
end_total_asset:4418142.473513333
Sharpe: 1.0135234795926031
=================================
----------------------------------
| time/ | |
| episodes | 12 |
| fps | 12 |
| time_elapsed | 2400 |
| total timesteps | 30192 |
| train/ | |
| actor_loss | -1.85e+08 |
| critic_loss | 1.87e+13 |
| ent_coef | 725 |
| ent_coef_loss | -673 |
| learning_rate | 0.0003 |
| n_updates | 30091 |
----------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4422046.188863339
Sharpe: 1.0140936726052256
=================================
=================================
begin_total_asset:1000000
end_total_asset:4424919.463828854
Sharpe: 1.014521127041106
=================================
=================================
begin_total_asset:1000000
end_total_asset:4427483.152494239
Sharpe: 1.0148626804754584
=================================
=================================
begin_total_asset:1000000
end_total_asset:4460697.650185859
Sharpe: 1.019852362102548
=================================
----------------------------------
| time/ | |
| episodes | 16 |
| fps | 12 |
| time_elapsed | 3210 |
| total timesteps | 40256 |
| train/ | |
| actor_loss | -1.93e+08 |
| critic_loss | 1.62e+13 |
| ent_coef | 1.01e+04 |
| ent_coef_loss | -238 |
| learning_rate | 0.0003 |
| n_updates | 40155 |
----------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4434035.982803257
Sharpe: 1.0161512551319891
=================================
=================================
begin_total_asset:1000000
end_total_asset:4454728.906041551
Sharpe: 1.018484863448905
=================================
=================================
begin_total_asset:1000000
end_total_asset:4475667.120269234
Sharpe: 1.0215545521682856
=================================
###Markdown
Model 5: **TD3**
###Code
agent = DRLAgent(env = env_train)
TD3_PARAMS = {"batch_size": 100,
"buffer_size": 1000000,
"learning_rate": 0.001}
model_td3 = agent.get_model("td3",model_kwargs = TD3_PARAMS)
trained_td3 = agent.train_model(model=model_td3,
tb_log_name='td3',
total_timesteps=30000)
###Output
Logging to tensorboard_log/td3/td3_1
=================================
begin_total_asset:1000000
end_total_asset:5232441.848437611
Sharpe: 0.8749907118878204
=================================
=================================
begin_total_asset:1000000
end_total_asset:5140658.98428856
Sharpe: 0.8628057073557059
=================================
=================================
begin_total_asset:1000000
end_total_asset:5140658.98428856
Sharpe: 0.8628057073557059
=================================
=================================
begin_total_asset:1000000
end_total_asset:5140658.98428856
Sharpe: 0.8628057073557059
=================================
----------------------------------
| time/ | |
| episodes | 4 |
| fps | 25 |
| time_elapsed | 445 |
| total timesteps | 11572 |
| train/ | |
| actor_loss | -4.69e+07 |
| critic_loss | 1.08e+13 |
| learning_rate | 0.001 |
| n_updates | 8679 |
----------------------------------
=================================
begin_total_asset:1000000
end_total_asset:5140658.98428856
Sharpe: 0.8628057073557059
=================================
=================================
begin_total_asset:1000000
end_total_asset:5140658.98428856
Sharpe: 0.8628057073557059
=================================
=================================
begin_total_asset:1000000
end_total_asset:5140658.98428856
Sharpe: 0.8628057073557059
=================================
=================================
begin_total_asset:1000000
end_total_asset:5140658.98428856
Sharpe: 0.8628057073557059
=================================
----------------------------------
| time/ | |
| episodes | 8 |
| fps | 23 |
| time_elapsed | 985 |
| total timesteps | 23144 |
| train/ | |
| actor_loss | -1.05e+08 |
| critic_loss | 2.77e+13 |
| learning_rate | 0.001 |
| n_updates | 20251 |
----------------------------------
=================================
begin_total_asset:1000000
end_total_asset:5140658.98428856
Sharpe: 0.8628057073557059
=================================
=================================
begin_total_asset:1000000
end_total_asset:5140658.98428856
Sharpe: 0.8628057073557059
=================================
=================================
begin_total_asset:1000000
end_total_asset:5140658.98428856
Sharpe: 0.8628057073557059
=================================
###Markdown
Trading

Assume that we have $1,000,000 initial capital at 2020-07-01. We use the trained TD3 model to trade Dow Jones 30 stocks.
###Code
trade = data_split(df,'2020-07-01', '2021-07-01')
e_trade_gym = StockPortfolioEnv(df = trade, **env_kwargs)
trade.shape
df_daily_return, df_actions = DRLAgent.DRL_prediction(model=trained_td3,
environment = e_trade_gym)
df_daily_return.head()
df_daily_return.to_csv('df_daily_return.csv')
df_actions.head()
df_actions.to_csv('df_actions.csv')
###Output
_____no_output_____
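###Markdown
As a quick sanity check on the saved results (a minimal sketch, not part of the original pipeline), the daily returns produced by DRL_prediction can be compounded back into an account-value curve. This assumes df_daily_return carries the daily_return column used later in this notebook and the same $1,000,000 initial capital.
###Code
# Rebuild the account-value curve by compounding simple daily returns.
initial_capital = 1000000
account_value = initial_capital * (1 + df_daily_return['daily_return']).cumprod()
print(account_value.tail())
###Output
_____no_output_____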
###Markdown
Part 7: Backtest Our Strategy

Backtesting plays a key role in evaluating the performance of a trading strategy. An automated backtesting tool is preferred because it reduces human error. We usually use the Quantopian pyfolio package to backtest our trading strategies. It is easy to use and consists of various individual plots that together provide a comprehensive picture of a trading strategy's performance.

7.1 BackTestStats

Pass in df_daily_return; this information is stored in the env class.
###Code
from pyfolio import timeseries
DRL_strat = convert_daily_return_to_pyfolio_ts(df_daily_return)
perf_func = timeseries.perf_stats
perf_stats_all = perf_func(returns=DRL_strat,
                           factor_returns=DRL_strat,
                           positions=None,
                           transactions=None,
                           turnover_denom="AGB")
print("==============DRL Strategy Stats===========")
perf_stats_all
#baseline stats
print("==============Get Baseline Stats===========")
baseline_df = get_baseline(
ticker="^DJI",
start = df_daily_return.loc[0,'date'],
end = df_daily_return.loc[len(df_daily_return)-1,'date'])
stats = backtest_stats(baseline_df, value_col_name = 'close')
###Output
==============Get Baseline Stats===========
[*********************100%***********************] 1 of 1 completed
Shape of DataFrame: (251, 8)
Annual return 0.334042
Cumulative returns 0.332517
Annual volatility 0.146033
Sharpe ratio 2.055458
Calmar ratio 3.740347
Stability 0.945402
Max drawdown -0.089308
Omega ratio 1.408111
Sortino ratio 3.075978
Skew NaN
Kurtosis NaN
Tail ratio 1.078766
Daily value at risk -0.017207
dtype: float64
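###Markdown
For reference, the Sharpe ratio printed throughout the training logs and in the stats above follows the standard annualized formula. A minimal sketch, assuming 252 trading days per year, simple daily returns, and a zero risk-free rate:
###Code
import numpy as np

def annualized_sharpe(daily_returns, periods_per_year=252):
    # mean/std of daily returns, scaled by the square root of periods per year
    return np.sqrt(periods_per_year) * daily_returns.mean() / daily_returns.std()

# e.g. on the DRL strategy's daily returns:
print(annualized_sharpe(df_daily_return['daily_return']))
###Output
_____no_output_____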
###Markdown
7.2 BackTestPlot
###Code
import pyfolio
%matplotlib inline
baseline_df = get_baseline(
ticker='^DJI', start=df_daily_return.loc[0,'date'], end='2021-07-01'
)
baseline_returns = get_daily_return(baseline_df, value_col_name="close")
with pyfolio.plotting.plotting_context(font_scale=1.1):
pyfolio.create_full_tear_sheet(returns = DRL_strat,
benchmark_rets=baseline_returns, set_context=False)
###Output
[*********************100%***********************] 1 of 1 completed
Shape of DataFrame: (252, 8)
###Markdown
Min-Variance Portfolio Allocation
###Code
!pip install PyPortfolioOpt
from pypfopt.efficient_frontier import EfficientFrontier
from pypfopt import risk_models
unique_tic = trade.tic.unique()
unique_trade_date = trade.date.unique()
df.head()
#calculate_portfolio_minimum_variance
portfolio = pd.DataFrame(index = range(1), columns = unique_trade_date)
initial_capital = 1000000
portfolio.loc[0,unique_trade_date[0]] = initial_capital
for i in range(len( unique_trade_date)-1):
df_temp = df[df.date==unique_trade_date[i]].reset_index(drop=True)
df_temp_next = df[df.date==unique_trade_date[i+1]].reset_index(drop=True)
#Sigma = risk_models.sample_cov(df_temp.return_list[0])
#calculate covariance matrix
Sigma = df_temp.return_list[0].cov()
#portfolio allocation
ef_min_var = EfficientFrontier(None, Sigma,weight_bounds=(0, 0.1))
#minimum variance
raw_weights_min_var = ef_min_var.min_volatility()
#get weights
cleaned_weights_min_var = ef_min_var.clean_weights()
#current capital
cap = portfolio.iloc[0, i]
#current cash invested for each stock
current_cash = [element * cap for element in list(cleaned_weights_min_var.values())]
# current held shares
current_shares = list(np.array(current_cash)
/ np.array(df_temp.close))
# next time period price
next_price = np.array(df_temp_next.close)
##next_price * current share to calculate next total account value
portfolio.iloc[0, i+1] = np.dot(current_shares, next_price)
portfolio=portfolio.T
portfolio.columns = ['account_value']
portfolio.head()
time_ind = pd.Series(df_daily_return.date)
# cumulative return curves: compound the simple daily returns, then subtract 1
td3_cumpod = (df_daily_return.daily_return + 1).cumprod() - 1
min_var_cumpod = (portfolio.account_value.pct_change() + 1).cumprod() - 1
dji_cumpod = (baseline_returns + 1).cumprod() - 1
###Output
_____no_output_____
###Markdown
Plotly: DRL, Min-Variance, DJIA
###Code
from datetime import datetime as dt
import matplotlib.pyplot as plt
import plotly
import plotly.graph_objs as go
trace0_portfolio = go.Scatter(x = time_ind, y = td3_cumpod, mode = 'lines', name = 'TD3 (Portfolio Allocation)')
trace1_portfolio = go.Scatter(x = time_ind, y = dji_cumpod, mode = 'lines', name = 'DJIA')
trace2_portfolio = go.Scatter(x = time_ind, y = min_var_cumpod, mode = 'lines', name = 'Min-Variance')
#trace3_portfolio = go.Scatter(x = time_ind, y = ddpg_cumpod, mode = 'lines', name = 'DDPG')
#trace4_portfolio = go.Scatter(x = time_ind, y = addpg_cumpod, mode = 'lines', name = 'Adaptive-DDPG')
#trace5_portfolio = go.Scatter(x = time_ind, y = min_cumpod, mode = 'lines', name = 'Min-Variance')
#trace4 = go.Scatter(x = time_ind, y = addpg_cumpod, mode = 'lines', name = 'Adaptive-DDPG')
#trace2 = go.Scatter(x = time_ind, y = portfolio_cost_minv, mode = 'lines', name = 'Min-Variance')
#trace3 = go.Scatter(x = time_ind, y = spx_value, mode = 'lines', name = 'SPX')
fig = go.Figure()
fig.add_trace(trace0_portfolio)
fig.add_trace(trace1_portfolio)
fig.add_trace(trace2_portfolio)
fig.update_layout(
legend=dict(
x=0,
y=1,
traceorder="normal",
font=dict(
family="sans-serif",
size=15,
color="black"
),
bgcolor="White",
bordercolor="white",
borderwidth=2
),
)
#fig.update_layout(legend_orientation="h")
fig.update_layout(title={
#'text': "Cumulative Return using FinRL",
'y':0.85,
'x':0.5,
'xanchor': 'center',
'yanchor': 'top'})
#with Transaction cost
#fig.update_layout(title = 'Quarterly Trade Date')
fig.update_layout(
# margin=dict(l=20, r=20, t=20, b=20),
paper_bgcolor='rgba(1,1,0,0)',
plot_bgcolor='rgba(1, 1, 0, 0)',
#xaxis_title="Date",
yaxis_title="Cumulative Return",
xaxis={'type': 'date',
'tick0': time_ind[0],
'tickmode': 'linear',
'dtick': 86400000.0 *80}
)
fig.update_xaxes(showline=True,linecolor='black',showgrid=True, gridwidth=1, gridcolor='LightSteelBlue',mirror=True)
fig.update_yaxes(showline=True,linecolor='black',showgrid=True, gridwidth=1, gridcolor='LightSteelBlue',mirror=True)
fig.update_yaxes(zeroline=True, zerolinewidth=1, zerolinecolor='LightSteelBlue')
fig.show()
###Output
_____no_output_____
###Markdown
Deep Reinforcement Learning for Stock Trading from Scratch: Portfolio Allocation

Tutorials to use OpenAI DRL to perform portfolio allocation in one Jupyter Notebook | Presented at NeurIPS 2020: Deep RL Workshop

* This blog is based on our paper: FinRL: A Deep Reinforcement Learning Library for Automated Stock Trading in Quantitative Finance, presented at NeurIPS 2020: Deep RL Workshop.
* Check out the medium blog for detailed explanations:
* Please report any issues to our Github: https://github.com/AI4Finance-LLC/FinRL-Library/issues
* **Pytorch Version**

Content

* [1. Problem Definition](0)
* [2. Getting Started - Load Python packages](1)
  * [2.1. Install Packages](1.1)
  * [2.2. Check Additional Packages](1.2)
  * [2.3. Import Packages](1.3)
  * [2.4. Create Folders](1.4)
* [3. Download Data](2)
* [4. Preprocess Data](3)
  * [4.1. Technical Indicators](3.1)
  * [4.2. Perform Feature Engineering](3.2)
* [5. Build Environment](4)
  * [5.1. Training & Trade Data Split](4.1)
  * [5.2. User-defined Environment](4.2)
  * [5.3. Initialize Environment](4.3)
* [6. Implement DRL Algorithms](5)
* [7. Backtesting Performance](6)
  * [7.1. BackTestStats](6.1)
  * [7.2. BackTestPlot](6.2)
  * [7.3. Baseline Stats](6.3)
  * [7.4. Compare to Stock Market Index](6.4)

Part 1. Problem Definition

This problem is to design an automated trading solution for single stock trading. We model the stock trading process as a Markov Decision Process (MDP), and then formulate our trading goal as a maximization problem.

The algorithm is trained using Deep Reinforcement Learning (DRL) algorithms, and the components of the reinforcement learning environment are:

* Action: The action space describes the allowed actions that the agent takes to interact with the environment. Normally, a ∈ A includes three actions: a ∈ {−1, 0, 1}, where −1, 0, 1 represent selling, holding, and buying one stock. Also, an action can be carried out on multiple shares. We use an action space {−k, ..., −1, 0, 1, ..., k}, where k denotes the number of shares. For example, "Buy 10 shares of AAPL" or "Sell 10 shares of AAPL" are 10 or −10, respectively.
* Reward function: r(s, a, s′) is the incentive mechanism for an agent to learn a better action. The reward is the change of the portfolio value when action a is taken at state s, arriving at the new state s′, i.e., r(s, a, s′) = v′ − v, where v′ and v represent the portfolio values at states s′ and s, respectively.
* State: The state space describes the observations that the agent receives from the environment. Just as a human trader needs to analyze various information before executing a trade, so our trading agent observes many different features to better learn in an interactive environment.
* Environment: Dow 30 constituents

The data of the single stock that we will be using for this case study is obtained from the Yahoo Finance API. The data contains Open-High-Low-Close prices and volume.

Part 2. Getting Started - Load Python Packages

2.1. Install all the packages through FinRL library
###Code
## install finrl library
!pip install git+https://github.com/AI4Finance-LLC/FinRL-Library.git
###Output
Collecting git+https://github.com/AI4Finance-LLC/FinRL-Library.git
Cloning https://github.com/AI4Finance-LLC/FinRL-Library.git to /tmp/pip-req-build-4ebj8idf
Running command git clone -q https://github.com/AI4Finance-LLC/FinRL-Library.git /tmp/pip-req-build-4ebj8idf
Requirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from finrl==0.3.0) (1.19.5)
Requirement already satisfied: pandas>=1.1.5 in /usr/local/lib/python3.7/dist-packages (from finrl==0.3.0) (1.1.5)
Collecting stockstats
Downloading https://files.pythonhosted.org/packages/32/41/d3828c5bc0a262cb3112a4024108a3b019c183fa3b3078bff34bf25abf91/stockstats-0.3.2-py2.py3-none-any.whl
Collecting yfinance
Downloading https://files.pythonhosted.org/packages/5e/4e/88d31f5509edcbc51bcbb7eeae72516b17ada1bc2ad5b496e2d05d62c696/yfinance-0.1.60.tar.gz
Requirement already satisfied: matplotlib in /usr/local/lib/python3.7/dist-packages (from finrl==0.3.0) (3.2.2)
Requirement already satisfied: scikit-learn>=0.21.0 in /usr/local/lib/python3.7/dist-packages (from finrl==0.3.0) (0.22.2.post1)
Requirement already satisfied: gym>=0.17 in /usr/local/lib/python3.7/dist-packages (from finrl==0.3.0) (0.17.3)
Collecting stable-baselines3[extra]
Downloading https://files.pythonhosted.org/packages/c2/ce/5002282b316703191b9e7f7f1be03670f6a0d5e88181366e73d98d630f59/stable_baselines3-1.1.0-py3-none-any.whl (172kB)
|████████████████████████████████| 174kB 8.6MB/s
Requirement already satisfied: pytest in /usr/local/lib/python3.7/dist-packages (from finrl==0.3.0) (3.6.4)
Requirement already satisfied: setuptools>=41.4.0 in /usr/local/lib/python3.7/dist-packages (from finrl==0.3.0) (57.0.0)
Requirement already satisfied: wheel>=0.33.6 in /usr/local/lib/python3.7/dist-packages (from finrl==0.3.0) (0.36.2)
Collecting pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2
Cloning https://github.com/quantopian/pyfolio.git to /tmp/pip-install-str4s854/pyfolio
Running command git clone -q https://github.com/quantopian/pyfolio.git /tmp/pip-install-str4s854/pyfolio
Requirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.7/dist-packages (from pandas>=1.1.5->finrl==0.3.0) (2018.9)
Requirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.7/dist-packages (from pandas>=1.1.5->finrl==0.3.0) (2.8.1)
Collecting int-date>=0.1.7
Downloading https://files.pythonhosted.org/packages/43/27/31803df15173ab341fe7548c14154b54227dfd8f630daa09a1c6e7db52f7/int_date-0.1.8-py2.py3-none-any.whl
Requirement already satisfied: requests>=2.20 in /usr/local/lib/python3.7/dist-packages (from yfinance->finrl==0.3.0) (2.23.0)
Requirement already satisfied: multitasking>=0.0.7 in /usr/local/lib/python3.7/dist-packages (from yfinance->finrl==0.3.0) (0.0.9)
Collecting lxml>=4.5.1
Downloading https://files.pythonhosted.org/packages/30/c0/d0526314971fc661b083ab135747dc68446a3022686da8c16d25fcf6ef07/lxml-4.6.3-cp37-cp37m-manylinux2014_x86_64.whl (6.3MB)
|████████████████████████████████| 6.3MB 29.9MB/s
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->finrl==0.3.0) (1.3.1)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.7/dist-packages (from matplotlib->finrl==0.3.0) (0.10.0)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->finrl==0.3.0) (2.4.7)
Requirement already satisfied: scipy>=0.17.0 in /usr/local/lib/python3.7/dist-packages (from scikit-learn>=0.21.0->finrl==0.3.0) (1.4.1)
Requirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.7/dist-packages (from scikit-learn>=0.21.0->finrl==0.3.0) (1.0.1)
Requirement already satisfied: cloudpickle<1.7.0,>=1.2.0 in /usr/local/lib/python3.7/dist-packages (from gym>=0.17->finrl==0.3.0) (1.3.0)
Requirement already satisfied: pyglet<=1.5.0,>=1.4.0 in /usr/local/lib/python3.7/dist-packages (from gym>=0.17->finrl==0.3.0) (1.5.0)
Requirement already satisfied: torch>=1.4.0 in /usr/local/lib/python3.7/dist-packages (from stable-baselines3[extra]->finrl==0.3.0) (1.9.0+cu102)
Requirement already satisfied: psutil; extra == "extra" in /usr/local/lib/python3.7/dist-packages (from stable-baselines3[extra]->finrl==0.3.0) (5.4.8)
Requirement already satisfied: pillow; extra == "extra" in /usr/local/lib/python3.7/dist-packages (from stable-baselines3[extra]->finrl==0.3.0) (7.1.2)
Requirement already satisfied: tensorboard>=2.2.0; extra == "extra" in /usr/local/lib/python3.7/dist-packages (from stable-baselines3[extra]->finrl==0.3.0) (2.5.0)
Requirement already satisfied: opencv-python; extra == "extra" in /usr/local/lib/python3.7/dist-packages (from stable-baselines3[extra]->finrl==0.3.0) (4.1.2.30)
Requirement already satisfied: atari-py~=0.2.0; extra == "extra" in /usr/local/lib/python3.7/dist-packages (from stable-baselines3[extra]->finrl==0.3.0) (0.2.9)
Requirement already satisfied: six>=1.10.0 in /usr/local/lib/python3.7/dist-packages (from pytest->finrl==0.3.0) (1.15.0)
Requirement already satisfied: py>=1.5.0 in /usr/local/lib/python3.7/dist-packages (from pytest->finrl==0.3.0) (1.10.0)
Requirement already satisfied: pluggy<0.8,>=0.5 in /usr/local/lib/python3.7/dist-packages (from pytest->finrl==0.3.0) (0.7.1)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.7/dist-packages (from pytest->finrl==0.3.0) (21.2.0)
Requirement already satisfied: more-itertools>=4.0.0 in /usr/local/lib/python3.7/dist-packages (from pytest->finrl==0.3.0) (8.8.0)
Requirement already satisfied: atomicwrites>=1.0 in /usr/local/lib/python3.7/dist-packages (from pytest->finrl==0.3.0) (1.4.0)
Requirement already satisfied: ipython>=3.2.3 in /usr/local/lib/python3.7/dist-packages (from pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.0) (5.5.0)
Requirement already satisfied: seaborn>=0.7.1 in /usr/local/lib/python3.7/dist-packages (from pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.0) (0.11.1)
Collecting empyrical>=0.5.0
Downloading https://files.pythonhosted.org/packages/74/43/1b997c21411c6ab7c96dc034e160198272c7a785aeea7654c9bcf98bec83/empyrical-0.5.5.tar.gz (52kB)
|████████████████████████████████| 61kB 7.7MB/s
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests>=2.20->yfinance->finrl==0.3.0) (2021.5.30)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests>=2.20->yfinance->finrl==0.3.0) (3.0.4)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests>=2.20->yfinance->finrl==0.3.0) (1.24.3)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests>=2.20->yfinance->finrl==0.3.0) (2.10)
Requirement already satisfied: future in /usr/local/lib/python3.7/dist-packages (from pyglet<=1.5.0,>=1.4.0->gym>=0.17->finrl==0.3.0) (0.16.0)
Requirement already satisfied: typing-extensions in /usr/local/lib/python3.7/dist-packages (from torch>=1.4.0->stable-baselines3[extra]->finrl==0.3.0) (3.7.4.3)
Requirement already satisfied: tensorboard-data-server<0.7.0,>=0.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard>=2.2.0; extra == "extra"->stable-baselines3[extra]->finrl==0.3.0) (0.6.1)
Requirement already satisfied: grpcio>=1.24.3 in /usr/local/lib/python3.7/dist-packages (from tensorboard>=2.2.0; extra == "extra"->stable-baselines3[extra]->finrl==0.3.0) (1.34.1)
Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard>=2.2.0; extra == "extra"->stable-baselines3[extra]->finrl==0.3.0) (1.8.0)
Requirement already satisfied: google-auth<2,>=1.6.3 in /usr/local/lib/python3.7/dist-packages (from tensorboard>=2.2.0; extra == "extra"->stable-baselines3[extra]->finrl==0.3.0) (1.31.0)
Requirement already satisfied: absl-py>=0.4 in /usr/local/lib/python3.7/dist-packages (from tensorboard>=2.2.0; extra == "extra"->stable-baselines3[extra]->finrl==0.3.0) (0.12.0)
Requirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.7/dist-packages (from tensorboard>=2.2.0; extra == "extra"->stable-baselines3[extra]->finrl==0.3.0) (1.0.1)
Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.7/dist-packages (from tensorboard>=2.2.0; extra == "extra"->stable-baselines3[extra]->finrl==0.3.0) (3.3.4)
Requirement already satisfied: protobuf>=3.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard>=2.2.0; extra == "extra"->stable-baselines3[extra]->finrl==0.3.0) (3.12.4)
Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in /usr/local/lib/python3.7/dist-packages (from tensorboard>=2.2.0; extra == "extra"->stable-baselines3[extra]->finrl==0.3.0) (0.4.4)
Requirement already satisfied: pygments in /usr/local/lib/python3.7/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.0) (2.6.1)
Requirement already satisfied: pickleshare in /usr/local/lib/python3.7/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.0) (0.7.5)
Requirement already satisfied: traitlets>=4.2 in /usr/local/lib/python3.7/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.0) (5.0.5)
Requirement already satisfied: pexpect; sys_platform != "win32" in /usr/local/lib/python3.7/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.0) (4.8.0)
Requirement already satisfied: decorator in /usr/local/lib/python3.7/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.0) (4.4.2)
Requirement already satisfied: simplegeneric>0.8 in /usr/local/lib/python3.7/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.0) (0.8.1)
Requirement already satisfied: prompt-toolkit<2.0.0,>=1.0.4 in /usr/local/lib/python3.7/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.0) (1.0.18)
Requirement already satisfied: pandas-datareader>=0.2 in /usr/local/lib/python3.7/dist-packages (from empyrical>=0.5.0->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.0) (0.9.0)
Requirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.7/dist-packages (from google-auth<2,>=1.6.3->tensorboard>=2.2.0; extra == "extra"->stable-baselines3[extra]->finrl==0.3.0) (0.2.8)
Requirement already satisfied: rsa<5,>=3.1.4; python_version >= "3.6" in /usr/local/lib/python3.7/dist-packages (from google-auth<2,>=1.6.3->tensorboard>=2.2.0; extra == "extra"->stable-baselines3[extra]->finrl==0.3.0) (4.7.2)
Requirement already satisfied: cachetools<5.0,>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from google-auth<2,>=1.6.3->tensorboard>=2.2.0; extra == "extra"->stable-baselines3[extra]->finrl==0.3.0) (4.2.2)
Requirement already satisfied: importlib-metadata; python_version < "3.8" in /usr/local/lib/python3.7/dist-packages (from markdown>=2.6.8->tensorboard>=2.2.0; extra == "extra"->stable-baselines3[extra]->finrl==0.3.0) (4.5.0)
Requirement already satisfied: requests-oauthlib>=0.7.0 in /usr/local/lib/python3.7/dist-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard>=2.2.0; extra == "extra"->stable-baselines3[extra]->finrl==0.3.0) (1.3.0)
Requirement already satisfied: ipython-genutils in /usr/local/lib/python3.7/dist-packages (from traitlets>=4.2->ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.0) (0.2.0)
Requirement already satisfied: ptyprocess>=0.5 in /usr/local/lib/python3.7/dist-packages (from pexpect; sys_platform != "win32"->ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.0) (0.7.0)
Requirement already satisfied: wcwidth in /usr/local/lib/python3.7/dist-packages (from prompt-toolkit<2.0.0,>=1.0.4->ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.0) (0.2.5)
Requirement already satisfied: pyasn1<0.5.0,>=0.4.6 in /usr/local/lib/python3.7/dist-packages (from pyasn1-modules>=0.2.1->google-auth<2,>=1.6.3->tensorboard>=2.2.0; extra == "extra"->stable-baselines3[extra]->finrl==0.3.0) (0.4.8)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.7/dist-packages (from importlib-metadata; python_version < "3.8"->markdown>=2.6.8->tensorboard>=2.2.0; extra == "extra"->stable-baselines3[extra]->finrl==0.3.0) (3.4.1)
Requirement already satisfied: oauthlib>=3.0.0 in /usr/local/lib/python3.7/dist-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard>=2.2.0; extra == "extra"->stable-baselines3[extra]->finrl==0.3.0) (3.1.1)
Building wheels for collected packages: finrl, yfinance, pyfolio, empyrical
Building wheel for finrl (setup.py) ... done
Created wheel for finrl: filename=finrl-0.3.0-cp37-none-any.whl size=39029 sha256=b5e0e12e95b4121b93cd651c6235b56c1f0036b8cd5fe4282ed341b89b704b71
Stored in directory: /tmp/pip-ephem-wheel-cache-81fatlyd/wheels/9c/19/bf/c644def96612df1ad42c94d5304966797eaa3221dffc5efe0b
Building wheel for yfinance (setup.py) ... done
Created wheel for yfinance: filename=yfinance-0.1.60-py2.py3-none-any.whl size=23819 sha256=d46d05a6ce39a7748caf23509013c8d4f9dbd94a5df0367083a19f5756645a42
Stored in directory: /root/.cache/pip/wheels/f0/be/a4/846f02c5985562250917b0ab7b33fff737c8e6e8cd5209aa3b
Building wheel for pyfolio (setup.py) ... done
Created wheel for pyfolio: filename=pyfolio-0.9.2+75.g4b901f6-cp37-none-any.whl size=75776 sha256=11761329471b373c2c50b45eb52816dfdf975ce01142cdb4f4a774fab19b7a13
Stored in directory: /tmp/pip-ephem-wheel-cache-81fatlyd/wheels/43/ce/d9/6752fb6e03205408773235435205a0519d2c608a94f1976e56
Building wheel for empyrical (setup.py) ... done
Created wheel for empyrical: filename=empyrical-0.5.5-cp37-none-any.whl size=39780 sha256=2b4515c7d9f959d16244e3b5a89dad1452b2bbc5e8b309d5e43e128f2e87a888
Stored in directory: /root/.cache/pip/wheels/ea/b2/c8/6769d8444d2f2e608fae2641833110668d0ffd1abeb2e9f3fc
Successfully built finrl yfinance pyfolio empyrical
Installing collected packages: int-date, stockstats, lxml, yfinance, stable-baselines3, empyrical, pyfolio, finrl
Found existing installation: lxml 4.2.6
Uninstalling lxml-4.2.6:
Successfully uninstalled lxml-4.2.6
Successfully installed empyrical-0.5.5 finrl-0.3.0 int-date-0.1.8 lxml-4.6.3 pyfolio-0.9.2+75.g4b901f6 stable-baselines3-1.1.0 stockstats-0.3.2 yfinance-0.1.60
###Markdown
2.2. Check if the additional packages needed are present; if not, install them.

* Yahoo Finance API
* pandas
* numpy
* matplotlib
* stockstats
* OpenAI gym
* stable-baselines
* tensorflow
* pyfolio

2.3. Import Packages
###Code
import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
matplotlib.use('Agg')
import datetime
from finrl.config import config
from finrl.marketdata.yahoodownloader import YahooDownloader
from finrl.preprocessing.preprocessors import FeatureEngineer
from finrl.preprocessing.data import data_split
from finrl.env.env_portfolio import StockPortfolioEnv
from finrl.model.models import DRLAgent
from finrl.trade.backtest import backtest_stats, backtest_plot, get_daily_return, get_baseline,convert_daily_return_to_pyfolio_ts
import sys
sys.path.append("../FinRL-Library")
###Output
/usr/local/lib/python3.7/dist-packages/pyfolio/pos.py:27: UserWarning: Module "zipline.assets" not found; multipliers will not be applied to position notionals.
'Module "zipline.assets" not found; multipliers will not be applied'
###Markdown
2.4. Create Folders
###Code
import os
if not os.path.exists("./" + config.DATA_SAVE_DIR):
os.makedirs("./" + config.DATA_SAVE_DIR)
if not os.path.exists("./" + config.TRAINED_MODEL_DIR):
os.makedirs("./" + config.TRAINED_MODEL_DIR)
if not os.path.exists("./" + config.TENSORBOARD_LOG_DIR):
os.makedirs("./" + config.TENSORBOARD_LOG_DIR)
if not os.path.exists("./" + config.RESULTS_DIR):
os.makedirs("./" + config.RESULTS_DIR)
###Output
_____no_output_____
###Markdown
Part 3. Download Data

Yahoo Finance is a website that provides stock data, financial news, financial reports, etc. All the data provided by Yahoo Finance is free.

* FinRL uses a class **YahooDownloader** to fetch data from the Yahoo Finance API.
* Call Limit: Using the Public API (without authentication), you are limited to 2,000 requests per hour per IP (or up to a total of 48,000 requests a day).
###Code
print(config.DOW_30_TICKER)
df = YahooDownloader(start_date = '2008-01-01',
end_date = '2021-07-01',
ticker_list = config.DOW_30_TICKER).fetch_data()
df.head()
df.shape
###Output
_____no_output_____
###Markdown
Part 4: Preprocess Data

Data preprocessing is a crucial step for training a high quality machine learning model. We need to check for missing data and do feature engineering in order to convert the data into a model-ready state.

* Add technical indicators. In practical trading, various information needs to be taken into account, for example the historical stock prices, current holding shares, technical indicators, etc. In this article, we demonstrate two trend-following technical indicators: MACD and RSI.
* Add turbulence index. Risk-aversion reflects whether an investor will choose to preserve capital. It also influences one's trading strategy when facing different market volatility levels. To control risk in a worst-case scenario, such as the financial crisis of 2007–2008, FinRL employs the financial turbulence index that measures extreme asset price fluctuation.
###Code
fe = FeatureEngineer(
use_technical_indicator=True,
use_turbulence=False,
user_defined_feature = False)
df = fe.preprocess_data(df)
df.shape
df.head()
###Output
_____no_output_____
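###Markdown
To make the technical-indicator step concrete, here is a hand-rolled sketch of the two trend-following indicators mentioned above, computed with plain pandas for a single ticker. FinRL itself delegates this to the stockstats package inside FeatureEngineer, so the parameter and smoothing choices below are illustrative assumptions, not the library's exact implementation:
###Code
# Illustrative MACD and RSI for one ticker, using common parameter choices.
single = df[df.tic == 'AAPL'].sort_values('date')
close = single['close']

# MACD: difference between the 12-day and 26-day exponential moving averages.
ema12 = close.ewm(span=12, adjust=False).mean()
ema26 = close.ewm(span=26, adjust=False).mean()
macd = ema12 - ema26

# RSI (14-day): ratio of average gains to average losses, mapped onto [0, 100].
delta = close.diff()
gain = delta.clip(lower=0).rolling(14).mean()
loss = (-delta.clip(upper=0)).rolling(14).mean()
rsi = 100 - 100 / (1 + gain / loss)

print(macd.tail())
print(rsi.tail())
###Output
_____no_output_____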
###Markdown
Add covariance matrix as states
###Code
# add covariance matrix as states
df=df.sort_values(['date','tic'],ignore_index=True)
df.index = df.date.factorize()[0]
cov_list = []
return_list = []
# look back is one year
lookback=252
for i in range(lookback,len(df.index.unique())):
data_lookback = df.loc[i-lookback:i,:]
price_lookback=data_lookback.pivot_table(index = 'date',columns = 'tic', values = 'close')
return_lookback = price_lookback.pct_change().dropna()
return_list.append(return_lookback)
covs = return_lookback.cov().values
cov_list.append(covs)
df_cov = pd.DataFrame({'date':df.date.unique()[lookback:],'cov_list':cov_list,'return_list':return_list})
df = df.merge(df_cov, on='date')
df = df.sort_values(['date','tic']).reset_index(drop=True)
df.shape
df.head()
###Output
_____no_output_____
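###Markdown
A quick illustrative check (hypothetical, not in the original notebook): after the merge, every row carries the covariance matrix of the trailing year of returns across all tickers, which later becomes part of the agent's observation.
###Code
# Inspect the covariance matrix attached to the first row.
cov_matrix = df.loc[0, 'cov_list']
print(type(cov_matrix))   # numpy.ndarray produced by return_lookback.cov().values
print(cov_matrix.shape)   # expect roughly (30, 30) for the Dow 30 constituents
###Output
_____no_output_____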
###Markdown
Part 5. Design Environment

Considering the stochastic and interactive nature of automated stock trading tasks, a financial task is modeled as a **Markov Decision Process (MDP)** problem. The training process involves observing stock price changes, taking an action, and calculating the reward, so that the agent adjusts its strategy accordingly. By interacting with the environment, the trading agent derives a trading strategy that maximizes rewards as time proceeds.

Our trading environments, based on the OpenAI Gym framework, simulate live stock markets with real market data according to the principle of time-driven simulation.

The action space describes the allowed actions that the agent uses to interact with the environment. Normally, action a includes three values: {-1, 0, 1}, where -1, 0, 1 represent selling, holding, and buying one share. Also, an action can be carried out on multiple shares. We use an action space {-k, ..., -1, 0, 1, ..., k}, where k denotes the number of shares to buy and -k denotes the number of shares to sell. For example, "Buy 10 shares of AAPL" or "Sell 10 shares of AAPL" are 10 or -10, respectively. The continuous action space needs to be normalized to [-1, 1], since the policy is defined on a Gaussian distribution, which needs to be normalized and symmetric.

Training data split: 2009-01-01 to 2020-07-01
###Code
train = data_split(df, '2009-01-01','2020-07-01')
#trade = data_split(df, '2020-01-01', config.END_DATE)
train.head()
###Output
_____no_output_____
###Markdown
Environment for Portfolio Allocation
###Code
import numpy as np
import pandas as pd
from gym.utils import seeding
import gym
from gym import spaces
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
from stable_baselines3.common.vec_env import DummyVecEnv
class StockPortfolioEnv(gym.Env):
"""A single stock trading environment for OpenAI gym
Attributes
----------
df: DataFrame
input data
stock_dim : int
number of unique stocks
hmax : int
maximum number of shares to trade
initial_amount : int
            starting amount of cash
transaction_cost_pct: float
transaction cost percentage per trade
reward_scaling: float
scaling factor for reward, good for training
state_space: int
the dimension of input features
action_space: int
equals stock dimension
tech_indicator_list: list
a list of technical indicator names
turbulence_threshold: int
a threshold to control risk aversion
day: int
an increment number to control date
Methods
-------
_sell_stock()
perform sell action based on the sign of the action
_buy_stock()
perform buy action based on the sign of the action
step()
at each step the agent will return actions, then
we will calculate the reward, and return the next observation.
reset()
reset the environment
render()
        return the current environment state
save_asset_memory()
return account value at each time step
save_action_memory()
return actions/positions at each time step
"""
metadata = {'render.modes': ['human']}
def __init__(self,
df,
stock_dim,
hmax,
initial_amount,
transaction_cost_pct,
reward_scaling,
state_space,
action_space,
tech_indicator_list,
turbulence_threshold=None,
lookback=252,
day = 0):
#super(StockEnv, self).__init__()
#money = 10 , scope = 1
self.day = day
self.lookback=lookback
self.df = df
self.stock_dim = stock_dim
self.hmax = hmax
self.initial_amount = initial_amount
self.transaction_cost_pct =transaction_cost_pct
self.reward_scaling = reward_scaling
self.state_space = state_space
self.action_space = action_space
self.tech_indicator_list = tech_indicator_list
# action_space normalization and shape is self.stock_dim
self.action_space = spaces.Box(low = 0, high = 1,shape = (self.action_space,))
# Shape = (34, 30)
# covariance matrix + technical indicators
self.observation_space = spaces.Box(low=-np.inf, high=np.inf, shape = (self.state_space+len(self.tech_indicator_list),self.state_space))
# load data from a pandas dataframe
self.data = self.df.loc[self.day,:]
self.covs = self.data['cov_list'].values[0]
self.state = np.append(np.array(self.covs), [self.data[tech].values.tolist() for tech in self.tech_indicator_list ], axis=0)
self.terminal = False
self.turbulence_threshold = turbulence_threshold
        # initialize state: initial portfolio return + individual stock return + individual weights
self.portfolio_value = self.initial_amount
# memorize portfolio value each step
self.asset_memory = [self.initial_amount]
# memorize portfolio return each step
self.portfolio_return_memory = [0]
self.actions_memory=[[1/self.stock_dim]*self.stock_dim]
self.date_memory=[self.data.date.unique()[0]]
def step(self, actions):
# print(self.day)
self.terminal = self.day >= len(self.df.index.unique())-1
# print(actions)
if self.terminal:
df = pd.DataFrame(self.portfolio_return_memory)
df.columns = ['daily_return']
plt.plot(df.daily_return.cumsum(),'r')
plt.savefig('results/cumulative_reward.png')
plt.close()
plt.plot(self.portfolio_return_memory,'r')
plt.savefig('results/rewards.png')
plt.close()
print("=================================")
print("begin_total_asset:{}".format(self.asset_memory[0]))
print("end_total_asset:{}".format(self.portfolio_value))
df_daily_return = pd.DataFrame(self.portfolio_return_memory)
df_daily_return.columns = ['daily_return']
if df_daily_return['daily_return'].std() !=0:
sharpe = (252**0.5)*df_daily_return['daily_return'].mean()/ \
df_daily_return['daily_return'].std()
print("Sharpe: ",sharpe)
print("=================================")
return self.state, self.reward, self.terminal,{}
else:
#print("Model actions: ",actions)
# actions are the portfolio weight
# normalize to sum of 1
#if (np.array(actions) - np.array(actions).min()).sum() != 0:
# norm_actions = (np.array(actions) - np.array(actions).min()) / (np.array(actions) - np.array(actions).min()).sum()
#else:
# norm_actions = actions
weights = self.softmax_normalization(actions)
#print("Normalized actions: ", weights)
self.actions_memory.append(weights)
last_day_memory = self.data
#load next state
self.day += 1
self.data = self.df.loc[self.day,:]
self.covs = self.data['cov_list'].values[0]
self.state = np.append(np.array(self.covs), [self.data[tech].values.tolist() for tech in self.tech_indicator_list ], axis=0)
#print(self.state)
            # calculate portfolio return
# individual stocks' return * weight
portfolio_return = sum(((self.data.close.values / last_day_memory.close.values)-1)*weights)
# update portfolio value
new_portfolio_value = self.portfolio_value*(1+portfolio_return)
self.portfolio_value = new_portfolio_value
# save into memory
self.portfolio_return_memory.append(portfolio_return)
self.date_memory.append(self.data.date.unique()[0])
self.asset_memory.append(new_portfolio_value)
            # the reward is the new portfolio value (the end portfolio value at the final step)
self.reward = new_portfolio_value
#print("Step reward: ", self.reward)
#self.reward = self.reward*self.reward_scaling
return self.state, self.reward, self.terminal, {}
def reset(self):
self.asset_memory = [self.initial_amount]
self.day = 0
self.data = self.df.loc[self.day,:]
# load states
self.covs = self.data['cov_list'].values[0]
self.state = np.append(np.array(self.covs), [self.data[tech].values.tolist() for tech in self.tech_indicator_list ], axis=0)
self.portfolio_value = self.initial_amount
#self.cost = 0
#self.trades = 0
self.terminal = False
self.portfolio_return_memory = [0]
self.actions_memory=[[1/self.stock_dim]*self.stock_dim]
self.date_memory=[self.data.date.unique()[0]]
return self.state
def render(self, mode='human'):
return self.state
def softmax_normalization(self, actions):
numerator = np.exp(actions)
denominator = np.sum(np.exp(actions))
softmax_output = numerator/denominator
return softmax_output
def save_asset_memory(self):
date_list = self.date_memory
portfolio_return = self.portfolio_return_memory
#print(len(date_list))
#print(len(asset_list))
df_account_value = pd.DataFrame({'date':date_list,'daily_return':portfolio_return})
return df_account_value
def save_action_memory(self):
# date and close price length must match actions length
date_list = self.date_memory
df_date = pd.DataFrame(date_list)
df_date.columns = ['date']
action_list = self.actions_memory
df_actions = pd.DataFrame(action_list)
df_actions.columns = self.data.tic.values
df_actions.index = df_date.date
#df_actions = pd.DataFrame({'date':date_list,'actions':action_list})
return df_actions
def _seed(self, seed=None):
self.np_random, seed = seeding.np_random(seed)
return [seed]
def get_sb_env(self):
e = DummyVecEnv([lambda: self])
obs = e.reset()
return e, obs
stock_dimension = len(train.tic.unique())
state_space = stock_dimension
print(f"Stock Dimension: {stock_dimension}, State Space: {state_space}")
env_kwargs = {
"hmax": 100,
"initial_amount": 1000000,
"transaction_cost_pct": 0.001,
"state_space": state_space,
"stock_dim": stock_dimension,
"tech_indicator_list": config.TECHNICAL_INDICATORS_LIST,
"action_space": stock_dimension,
"reward_scaling": 1e-4
}
e_train_gym = StockPortfolioEnv(df = train, **env_kwargs)
env_train, _ = e_train_gym.get_sb_env()
print(type(env_train))
###Output
<class 'stable_baselines3.common.vec_env.dummy_vec_env.DummyVecEnv'>
###Markdown
Part 6: Implement DRL Algorithms* The implementation of the DRL algorithms is based on **OpenAI Baselines** and **Stable Baselines**. Stable Baselines is a fork of OpenAI Baselines, with major structural refactoring and code cleanups.* FinRL library includes fine-tuned standard DRL algorithms, such as DQN, DDPG, Multi-Agent DDPG, PPO, SAC, A2C and TD3. We also allow users to design their own DRL algorithms by adapting these DRL algorithms, as in the sketch below.
###Code
# initialize
agent = DRLAgent(env = env_train)
###Output
_____no_output_____
###Markdown
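As a sketch of swapping in another algorithm with the same agent object, the cell below requests DDPG instead; the model name and hyperparameters are assumptions based on FinRL's config defaults and this cell was not run in this notebook:
###Code
# hypothetical: request a different off-policy algorithm from the same DRLAgent
DDPG_PARAMS = {"batch_size": 128, "buffer_size": 50000, "learning_rate": 0.001}
model_ddpg = agent.get_model("ddpg", model_kwargs=DDPG_PARAMS)
###Output
_____no_output_____
###Markdown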
Model 1: **A2C**
###Code
agent = DRLAgent(env = env_train)
A2C_PARAMS = {"n_steps": 5, "ent_coef": 0.005, "learning_rate": 0.0002}
model_a2c = agent.get_model(model_name="a2c",model_kwargs = A2C_PARAMS)
trained_a2c = agent.train_model(model=model_a2c,
tb_log_name='a2c',
total_timesteps=50000)
###Output
Logging to tensorboard_log/a2c/a2c_2
------------------------------------
| time/ | |
| fps | 352 |
| iterations | 100 |
| time_elapsed | 1 |
| total_timesteps | 500 |
| train/ | |
| entropy_loss | -41.3 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 12099 |
| policy_loss | 2.01e+08 |
| std | 0.959 |
| value_loss | 2.64e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 350 |
| iterations | 200 |
| time_elapsed | 2 |
| total_timesteps | 1000 |
| train/ | |
| entropy_loss | -41.3 |
| explained_variance | 5.96e-08 |
| learning_rate | 0.0002 |
| n_updates | 12199 |
| policy_loss | 2.37e+08 |
| std | 0.958 |
| value_loss | 4.39e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 349 |
| iterations | 300 |
| time_elapsed | 4 |
| total_timesteps | 1500 |
| train/ | |
| entropy_loss | -41.3 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 12299 |
| policy_loss | 3.76e+08 |
| std | 0.958 |
| value_loss | 1.01e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 349 |
| iterations | 400 |
| time_elapsed | 5 |
| total_timesteps | 2000 |
| train/ | |
| entropy_loss | -41.3 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 12399 |
| policy_loss | 4.06e+08 |
| std | 0.958 |
| value_loss | 1.33e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 349 |
| iterations | 500 |
| time_elapsed | 7 |
| total_timesteps | 2500 |
| train/ | |
| entropy_loss | -41.2 |
| explained_variance | 5.96e-08 |
| learning_rate | 0.0002 |
| n_updates | 12499 |
| policy_loss | 5.86e+08 |
| std | 0.957 |
| value_loss | 2.7e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4685300.195654661
Sharpe: 1.0453114515340531
=================================
-------------------------------------
| time/ | |
| fps | 340 |
| iterations | 600 |
| time_elapsed | 8 |
| total_timesteps | 3000 |
| train/ | |
| entropy_loss | -41.2 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 12599 |
| policy_loss | 1.81e+08 |
| std | 0.956 |
| value_loss | 2.19e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 341 |
| iterations | 700 |
| time_elapsed | 10 |
| total_timesteps | 3500 |
| train/ | |
| entropy_loss | -41.2 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 12699 |
| policy_loss | 2.12e+08 |
| std | 0.956 |
| value_loss | 3.4e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 342 |
| iterations | 800 |
| time_elapsed | 11 |
| total_timesteps | 4000 |
| train/ | |
| entropy_loss | -41.2 |
| explained_variance | 5.96e-08 |
| learning_rate | 0.0002 |
| n_updates | 12799 |
| policy_loss | 3.43e+08 |
| std | 0.956 |
| value_loss | 8.32e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 342 |
| iterations | 900 |
| time_elapsed | 13 |
| total_timesteps | 4500 |
| train/ | |
| entropy_loss | -41.2 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 12899 |
| policy_loss | 3.89e+08 |
| std | 0.955 |
| value_loss | 1.1e+14 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 343 |
| iterations | 1000 |
| time_elapsed | 14 |
| total_timesteps | 5000 |
| train/ | |
| entropy_loss | -41.2 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 12999 |
| policy_loss | 4.89e+08 |
| std | 0.955 |
| value_loss | 2.13e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4211670.620824253
Sharpe: 0.9836152322815558
=================================
------------------------------------
| time/ | |
| fps | 339 |
| iterations | 1100 |
| time_elapsed | 16 |
| total_timesteps | 5500 |
| train/ | |
| entropy_loss | -41.2 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 13099 |
| policy_loss | 1.73e+08 |
| std | 0.954 |
| value_loss | 2.48e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 340 |
| iterations | 1200 |
| time_elapsed | 17 |
| total_timesteps | 6000 |
| train/ | |
| entropy_loss | -41.1 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 13199 |
| policy_loss | 2.48e+08 |
| std | 0.954 |
| value_loss | 4.39e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 340 |
| iterations | 1300 |
| time_elapsed | 19 |
| total_timesteps | 6500 |
| train/ | |
| entropy_loss | -41.1 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 13299 |
| policy_loss | 3.53e+08 |
| std | 0.953 |
| value_loss | 9.04e+13 |
-------------------------------------
-------------------------------------
| time/ | |
| fps | 340 |
| iterations | 1400 |
| time_elapsed | 20 |
| total_timesteps | 7000 |
| train/ | |
| entropy_loss | -41.1 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 13399 |
| policy_loss | 3.96e+08 |
| std | 0.953 |
| value_loss | 1.21e+14 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 341 |
| iterations | 1500 |
| time_elapsed | 21 |
| total_timesteps | 7500 |
| train/ | |
| entropy_loss | -41.1 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 13499 |
| policy_loss | 5.96e+08 |
| std | 0.953 |
| value_loss | 2.56e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4639828.316566328
Sharpe: 1.0428808028309948
=================================
------------------------------------
| time/ | |
| fps | 338 |
| iterations | 1600 |
| time_elapsed | 23 |
| total_timesteps | 8000 |
| train/ | |
| entropy_loss | -41.1 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 13599 |
| policy_loss | 1.93e+08 |
| std | 0.952 |
| value_loss | 2.53e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 339 |
| iterations | 1700 |
| time_elapsed | 25 |
| total_timesteps | 8500 |
| train/ | |
| entropy_loss | -41.1 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 13699 |
| policy_loss | 2.44e+08 |
| std | 0.952 |
| value_loss | 4.34e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 339 |
| iterations | 1800 |
| time_elapsed | 26 |
| total_timesteps | 9000 |
| train/ | |
| entropy_loss | -41.1 |
| explained_variance | 1.79e-07 |
| learning_rate | 0.0002 |
| n_updates | 13799 |
| policy_loss | 3.55e+08 |
| std | 0.952 |
| value_loss | 9.29e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 340 |
| iterations | 1900 |
| time_elapsed | 27 |
| total_timesteps | 9500 |
| train/ | |
| entropy_loss | -41.1 |
| explained_variance | -2.38e-07 |
| learning_rate | 0.0002 |
| n_updates | 13899 |
| policy_loss | 4.14e+08 |
| std | 0.952 |
| value_loss | 1.31e+14 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 340 |
| iterations | 2000 |
| time_elapsed | 29 |
| total_timesteps | 10000 |
| train/ | |
| entropy_loss | -41.1 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 13999 |
| policy_loss | 6.22e+08 |
| std | 0.951 |
| value_loss | 2.87e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4775229.061094982
Sharpe: 1.0650992139820405
=================================
------------------------------------
| time/ | |
| fps | 338 |
| iterations | 2100 |
| time_elapsed | 31 |
| total_timesteps | 10500 |
| train/ | |
| entropy_loss | -41 |
| explained_variance | 5.96e-08 |
| learning_rate | 0.0002 |
| n_updates | 14099 |
| policy_loss | 1.82e+08 |
| std | 0.951 |
| value_loss | 2.24e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 338 |
| iterations | 2200 |
| time_elapsed | 32 |
| total_timesteps | 11000 |
| train/ | |
| entropy_loss | -41 |
| explained_variance | -2.38e-07 |
| learning_rate | 0.0002 |
| n_updates | 14199 |
| policy_loss | 2.38e+08 |
| std | 0.95 |
| value_loss | 4.4e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 338 |
| iterations | 2300 |
| time_elapsed | 33 |
| total_timesteps | 11500 |
| train/ | |
| entropy_loss | -41 |
| explained_variance | 2.38e-07 |
| learning_rate | 0.0002 |
| n_updates | 14299 |
| policy_loss | 3.63e+08 |
| std | 0.949 |
| value_loss | 9.98e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 339 |
| iterations | 2400 |
| time_elapsed | 35 |
| total_timesteps | 12000 |
| train/ | |
| entropy_loss | -41 |
| explained_variance | -2.38e-07 |
| learning_rate | 0.0002 |
| n_updates | 14399 |
| policy_loss | 4.3e+08 |
| std | 0.948 |
| value_loss | 1.35e+14 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 339 |
| iterations | 2500 |
| time_elapsed | 36 |
| total_timesteps | 12500 |
| train/ | |
| entropy_loss | -41 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 14499 |
| policy_loss | 6.25e+08 |
| std | 0.948 |
| value_loss | 2.75e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4678739.328824231
Sharpe: 1.0465241688702438
=================================
------------------------------------
| time/ | |
| fps | 338 |
| iterations | 2600 |
| time_elapsed | 38 |
| total_timesteps | 13000 |
| train/ | |
| entropy_loss | -41 |
| explained_variance | 5.96e-08 |
| learning_rate | 0.0002 |
| n_updates | 14599 |
| policy_loss | 1.82e+08 |
| std | 0.948 |
| value_loss | 1.89e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 338 |
| iterations | 2700 |
| time_elapsed | 39 |
| total_timesteps | 13500 |
| train/ | |
| entropy_loss | -41 |
| explained_variance | 5.96e-08 |
| learning_rate | 0.0002 |
| n_updates | 14699 |
| policy_loss | 2.21e+08 |
| std | 0.948 |
| value_loss | 3.95e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 338 |
| iterations | 2800 |
| time_elapsed | 41 |
| total_timesteps | 14000 |
| train/ | |
| entropy_loss | -40.9 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 14799 |
| policy_loss | 3.3e+08 |
| std | 0.948 |
| value_loss | 8.54e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 338 |
| iterations | 2900 |
| time_elapsed | 42 |
| total_timesteps | 14500 |
| train/ | |
| entropy_loss | -40.9 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 14899 |
| policy_loss | 4.24e+08 |
| std | 0.947 |
| value_loss | 1.26e+14 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 339 |
| iterations | 3000 |
| time_elapsed | 44 |
| total_timesteps | 15000 |
| train/ | |
| entropy_loss | -40.9 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 14999 |
| policy_loss | 5.96e+08 |
| std | 0.947 |
| value_loss | 2.6e+14 |
-------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4677079.483218055
Sharpe: 1.043334299291766
=================================
-------------------------------------
| time/ | |
| fps | 336 |
| iterations | 3100 |
| time_elapsed | 46 |
| total_timesteps | 15500 |
| train/ | |
| entropy_loss | -40.9 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 15099 |
| policy_loss | 1.66e+08 |
| std | 0.947 |
| value_loss | 2e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 336 |
| iterations | 3200 |
| time_elapsed | 47 |
| total_timesteps | 16000 |
| train/ | |
| entropy_loss | -40.9 |
| explained_variance | 5.96e-08 |
| learning_rate | 0.0002 |
| n_updates | 15199 |
| policy_loss | 2.31e+08 |
| std | 0.947 |
| value_loss | 3.68e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 337 |
| iterations | 3300 |
| time_elapsed | 48 |
| total_timesteps | 16500 |
| train/ | |
| entropy_loss | -40.9 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 15299 |
| policy_loss | 3.32e+08 |
| std | 0.946 |
| value_loss | 8.59e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 337 |
| iterations | 3400 |
| time_elapsed | 50 |
| total_timesteps | 17000 |
| train/ | |
| entropy_loss | -40.9 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 15399 |
| policy_loss | 3.93e+08 |
| std | 0.945 |
| value_loss | 1.15e+14 |
-------------------------------------
-------------------------------------
| time/ | |
| fps | 337 |
| iterations | 3500 |
| time_elapsed | 51 |
| total_timesteps | 17500 |
| train/ | |
| entropy_loss | -40.8 |
| explained_variance | -2.38e-07 |
| learning_rate | 0.0002 |
| n_updates | 15499 |
| policy_loss | 5.3e+08 |
| std | 0.944 |
| value_loss | 2.09e+14 |
-------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4359923.802114374
Sharpe: 1.0008163852772658
=================================
------------------------------------
| time/ | |
| fps | 336 |
| iterations | 3600 |
| time_elapsed | 53 |
| total_timesteps | 18000 |
| train/ | |
| entropy_loss | -40.8 |
| explained_variance | 1.79e-07 |
| learning_rate | 0.0002 |
| n_updates | 15599 |
| policy_loss | 1.7e+08 |
| std | 0.944 |
| value_loss | 1.92e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 336 |
| iterations | 3700 |
| time_elapsed | 54 |
| total_timesteps | 18500 |
| train/ | |
| entropy_loss | -40.8 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 15699 |
| policy_loss | 2.33e+08 |
| std | 0.943 |
| value_loss | 3.59e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 336 |
| iterations | 3800 |
| time_elapsed | 56 |
| total_timesteps | 19000 |
| train/ | |
| entropy_loss | -40.8 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 15799 |
| policy_loss | 3.35e+08 |
| std | 0.944 |
| value_loss | 8.35e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 337 |
| iterations | 3900 |
| time_elapsed | 57 |
| total_timesteps | 19500 |
| train/ | |
| entropy_loss | -40.8 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 15899 |
| policy_loss | 3.89e+08 |
| std | 0.944 |
| value_loss | 1.06e+14 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 337 |
| iterations | 4000 |
| time_elapsed | 59 |
| total_timesteps | 20000 |
| train/ | |
| entropy_loss | -40.8 |
| explained_variance | -2.38e-07 |
| learning_rate | 0.0002 |
| n_updates | 15999 |
| policy_loss | 5.99e+08 |
| std | 0.944 |
| value_loss | 2.18e+14 |
-------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4518146.7620793665
Sharpe: 1.017512586785335
=================================
------------------------------------
| time/ | |
| fps | 336 |
| iterations | 4100 |
| time_elapsed | 60 |
| total_timesteps | 20500 |
| train/ | |
| entropy_loss | -40.8 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 16099 |
| policy_loss | 1.81e+08 |
| std | 0.943 |
| value_loss | 2.24e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 336 |
| iterations | 4200 |
| time_elapsed | 62 |
| total_timesteps | 21000 |
| train/ | |
| entropy_loss | -40.8 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 16199 |
| policy_loss | 2.17e+08 |
| std | 0.943 |
| value_loss | 3.98e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 336 |
| iterations | 4300 |
| time_elapsed | 63 |
| total_timesteps | 21500 |
| train/ | |
| entropy_loss | -40.8 |
| explained_variance | 1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 16299 |
| policy_loss | 3.55e+08 |
| std | 0.942 |
| value_loss | 9.99e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 336 |
| iterations | 4400 |
| time_elapsed | 65 |
| total_timesteps | 22000 |
| train/ | |
| entropy_loss | -40.8 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 16399 |
| policy_loss | 4.37e+08 |
| std | 0.942 |
| value_loss | 1.35e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 337 |
| iterations | 4500 |
| time_elapsed | 66 |
| total_timesteps | 22500 |
| train/ | |
| entropy_loss | -40.7 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 16499 |
| policy_loss | 5.77e+08 |
| std | 0.941 |
| value_loss | 2.56e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4947928.885546611
Sharpe: 1.0770541591532077
=================================
------------------------------------
| time/ | |
| fps | 336 |
| iterations | 4600 |
| time_elapsed | 68 |
| total_timesteps | 23000 |
| train/ | |
| entropy_loss | -40.7 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 16599 |
| policy_loss | 1.56e+08 |
| std | 0.94 |
| value_loss | 1.75e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 336 |
| iterations | 4700 |
| time_elapsed | 69 |
| total_timesteps | 23500 |
| train/ | |
| entropy_loss | -40.7 |
| explained_variance | 5.96e-08 |
| learning_rate | 0.0002 |
| n_updates | 16699 |
| policy_loss | 2.11e+08 |
| std | 0.94 |
| value_loss | 3.38e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 336 |
| iterations | 4800 |
| time_elapsed | 71 |
| total_timesteps | 24000 |
| train/ | |
| entropy_loss | -40.7 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 16799 |
| policy_loss | 3.25e+08 |
| std | 0.94 |
| value_loss | 8.02e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 336 |
| iterations | 4900 |
| time_elapsed | 72 |
| total_timesteps | 24500 |
| train/ | |
| entropy_loss | -40.7 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 16899 |
| policy_loss | 4.07e+08 |
| std | 0.94 |
| value_loss | 1.14e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 336 |
| iterations | 5000 |
| time_elapsed | 74 |
| total_timesteps | 25000 |
| train/ | |
| entropy_loss | -40.7 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 16999 |
| policy_loss | 5.45e+08 |
| std | 0.939 |
| value_loss | 2.2e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4708435.905981962
Sharpe: 1.0421275396424545
=================================
------------------------------------
| time/ | |
| fps | 336 |
| iterations | 5100 |
| time_elapsed | 75 |
| total_timesteps | 25500 |
| train/ | |
| entropy_loss | -40.7 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 17099 |
| policy_loss | 1.74e+08 |
| std | 0.939 |
| value_loss | 2.32e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 336 |
| iterations | 5200 |
| time_elapsed | 77 |
| total_timesteps | 26000 |
| train/ | |
| entropy_loss | -40.6 |
| explained_variance | 5.96e-08 |
| learning_rate | 0.0002 |
| n_updates | 17199 |
| policy_loss | 2.26e+08 |
| std | 0.938 |
| value_loss | 3.94e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 336 |
| iterations | 5300 |
| time_elapsed | 78 |
| total_timesteps | 26500 |
| train/ | |
| entropy_loss | -40.7 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 17299 |
| policy_loss | 3.16e+08 |
| std | 0.938 |
| value_loss | 7.8e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 336 |
| iterations | 5400 |
| time_elapsed | 80 |
| total_timesteps | 27000 |
| train/ | |
| entropy_loss | -40.6 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 17399 |
| policy_loss | 3.95e+08 |
| std | 0.938 |
| value_loss | 1.14e+14 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 336 |
| iterations | 5500 |
| time_elapsed | 81 |
| total_timesteps | 27500 |
| train/ | |
| entropy_loss | -40.6 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 17499 |
| policy_loss | 6.04e+08 |
| std | 0.937 |
| value_loss | 2.22e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4591802.064526513
Sharpe: 1.0188228298492967
=================================
-------------------------------------
| time/ | |
| fps | 336 |
| iterations | 5600 |
| time_elapsed | 83 |
| total_timesteps | 28000 |
| train/ | |
| entropy_loss | -40.6 |
| explained_variance | -2.38e-07 |
| learning_rate | 0.0002 |
| n_updates | 17599 |
| policy_loss | 1.73e+08 |
| std | 0.937 |
| value_loss | 2.22e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 336 |
| iterations | 5700 |
| time_elapsed | 84 |
| total_timesteps | 28500 |
| train/ | |
| entropy_loss | -40.6 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 17699 |
| policy_loss | 2.13e+08 |
| std | 0.937 |
| value_loss | 3.68e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 336 |
| iterations | 5800 |
| time_elapsed | 86 |
| total_timesteps | 29000 |
| train/ | |
| entropy_loss | -40.6 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 17799 |
| policy_loss | 3.15e+08 |
| std | 0.937 |
| value_loss | 7.34e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 336 |
| iterations | 5900 |
| time_elapsed | 87 |
| total_timesteps | 29500 |
| train/ | |
| entropy_loss | -40.6 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 17899 |
| policy_loss | 3.56e+08 |
| std | 0.936 |
| value_loss | 1.01e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 336 |
| iterations | 6000 |
| time_elapsed | 89 |
| total_timesteps | 30000 |
| train/ | |
| entropy_loss | -40.5 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 17999 |
| policy_loss | 5.88e+08 |
| std | 0.935 |
| value_loss | 2.08e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4389104.288134387
Sharpe: 0.9933788463870157
=================================
------------------------------------
| time/ | |
| fps | 335 |
| iterations | 6100 |
| time_elapsed | 90 |
| total_timesteps | 30500 |
| train/ | |
| entropy_loss | -40.5 |
| explained_variance | 1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 18099 |
| policy_loss | 1.72e+08 |
| std | 0.935 |
| value_loss | 2.2e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 336 |
| iterations | 6200 |
| time_elapsed | 92 |
| total_timesteps | 31000 |
| train/ | |
| entropy_loss | -40.5 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 18199 |
| policy_loss | 2.32e+08 |
| std | 0.934 |
| value_loss | 3.84e+13 |
-------------------------------------
-------------------------------------
| time/ | |
| fps | 336 |
| iterations | 6300 |
| time_elapsed | 93 |
| total_timesteps | 31500 |
| train/ | |
| entropy_loss | -40.5 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 18299 |
| policy_loss | 3.14e+08 |
| std | 0.935 |
| value_loss | 7.79e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 336 |
| iterations | 6400 |
| time_elapsed | 95 |
| total_timesteps | 32000 |
| train/ | |
| entropy_loss | -40.5 |
| explained_variance | 1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 18399 |
| policy_loss | 3.81e+08 |
| std | 0.934 |
| value_loss | 9.57e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 336 |
| iterations | 6500 |
| time_elapsed | 96 |
| total_timesteps | 32500 |
| train/ | |
| entropy_loss | -40.5 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 18499 |
| policy_loss | 5.48e+08 |
| std | 0.933 |
| value_loss | 2.3e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4580263.352082179
Sharpe: 1.0226861102653615
=================================
------------------------------------
| time/ | |
| fps | 335 |
| iterations | 6600 |
| time_elapsed | 98 |
| total_timesteps | 33000 |
| train/ | |
| entropy_loss | -40.5 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 18599 |
| policy_loss | 1.57e+08 |
| std | 0.933 |
| value_loss | 1.92e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 335 |
| iterations | 6700 |
| time_elapsed | 99 |
| total_timesteps | 33500 |
| train/ | |
| entropy_loss | -40.5 |
| explained_variance | 5.96e-08 |
| learning_rate | 0.0002 |
| n_updates | 18699 |
| policy_loss | 2.32e+08 |
| std | 0.933 |
| value_loss | 3.7e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 335 |
| iterations | 6800 |
| time_elapsed | 101 |
| total_timesteps | 34000 |
| train/ | |
| entropy_loss | -40.5 |
| explained_variance | 5.96e-08 |
| learning_rate | 0.0002 |
| n_updates | 18799 |
| policy_loss | 3.06e+08 |
| std | 0.933 |
| value_loss | 7.37e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 334 |
| iterations | 6900 |
| time_elapsed | 103 |
| total_timesteps | 34500 |
| train/ | |
| entropy_loss | -40.4 |
| explained_variance | 5.96e-08 |
| learning_rate | 0.0002 |
| n_updates | 18899 |
| policy_loss | 3.57e+08 |
| std | 0.932 |
| value_loss | 9.15e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 334 |
| iterations | 7000 |
| time_elapsed | 104 |
| total_timesteps | 35000 |
| train/ | |
| entropy_loss | -40.4 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 18999 |
| policy_loss | 5.49e+08 |
| std | 0.931 |
| value_loss | 2.57e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4583070.081894048
Sharpe: 1.0296700608185065
=================================
------------------------------------
| time/ | |
| fps | 333 |
| iterations | 7100 |
| time_elapsed | 106 |
| total_timesteps | 35500 |
| train/ | |
| entropy_loss | -40.4 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 19099 |
| policy_loss | 1.68e+08 |
| std | 0.931 |
| value_loss | 1.88e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 333 |
| iterations | 7200 |
| time_elapsed | 107 |
| total_timesteps | 36000 |
| train/ | |
| entropy_loss | -40.4 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 19199 |
| policy_loss | 2.09e+08 |
| std | 0.931 |
| value_loss | 3.39e+13 |
-------------------------------------
-------------------------------------
| time/ | |
| fps | 333 |
| iterations | 7300 |
| time_elapsed | 109 |
| total_timesteps | 36500 |
| train/ | |
| entropy_loss | -40.4 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 19299 |
| policy_loss | 3.17e+08 |
| std | 0.931 |
| value_loss | 7.95e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 333 |
| iterations | 7400 |
| time_elapsed | 111 |
| total_timesteps | 37000 |
| train/ | |
| entropy_loss | -40.4 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 19399 |
| policy_loss | 3.68e+08 |
| std | 0.931 |
| value_loss | 9.27e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 332 |
| iterations | 7500 |
| time_elapsed | 112 |
| total_timesteps | 37500 |
| train/ | |
| entropy_loss | -40.4 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 19499 |
| policy_loss | 6.09e+08 |
| std | 0.931 |
| value_loss | 2.31e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4576426.405502999
Sharpe: 1.0235768164756291
=================================
------------------------------------
| time/ | |
| fps | 332 |
| iterations | 7600 |
| time_elapsed | 114 |
| total_timesteps | 38000 |
| train/ | |
| entropy_loss | -40.4 |
| explained_variance | 1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 19599 |
| policy_loss | 1.59e+08 |
| std | 0.931 |
| value_loss | 2.02e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 331 |
| iterations | 7700 |
| time_elapsed | 115 |
| total_timesteps | 38500 |
| train/ | |
| entropy_loss | -40.4 |
| explained_variance | 5.96e-08 |
| learning_rate | 0.0002 |
| n_updates | 19699 |
| policy_loss | 2.21e+08 |
| std | 0.93 |
| value_loss | 3.36e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 331 |
| iterations | 7800 |
| time_elapsed | 117 |
| total_timesteps | 39000 |
| train/ | |
| entropy_loss | -40.4 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 19799 |
| policy_loss | 3.26e+08 |
| std | 0.93 |
| value_loss | 8.54e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 331 |
| iterations | 7900 |
| time_elapsed | 119 |
| total_timesteps | 39500 |
| train/ | |
| entropy_loss | -40.4 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 19899 |
| policy_loss | 3.73e+08 |
| std | 0.93 |
| value_loss | 1.15e+14 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 331 |
| iterations | 8000 |
| time_elapsed | 120 |
| total_timesteps | 40000 |
| train/ | |
| entropy_loss | -40.3 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 19999 |
| policy_loss | 5.89e+08 |
| std | 0.929 |
| value_loss | 2.49e+14 |
-------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4940621.780834714
Sharpe: 1.0767272532158483
=================================
------------------------------------
| time/ | |
| fps | 330 |
| iterations | 8100 |
| time_elapsed | 122 |
| total_timesteps | 40500 |
| train/ | |
| entropy_loss | -40.3 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 20099 |
| policy_loss | 1.5e+08 |
| std | 0.928 |
| value_loss | 1.82e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 330 |
| iterations | 8200 |
| time_elapsed | 123 |
| total_timesteps | 41000 |
| train/ | |
| entropy_loss | -40.3 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 20199 |
| policy_loss | 1.78e+08 |
| std | 0.928 |
| value_loss | 2.61e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 330 |
| iterations | 8300 |
| time_elapsed | 125 |
| total_timesteps | 41500 |
| train/ | |
| entropy_loss | -40.3 |
| explained_variance | 1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 20299 |
| policy_loss | 3.09e+08 |
| std | 0.927 |
| value_loss | 6.16e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 330 |
| iterations | 8400 |
| time_elapsed | 127 |
| total_timesteps | 42000 |
| train/ | |
| entropy_loss | -40.3 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 20399 |
| policy_loss | 3.35e+08 |
| std | 0.927 |
| value_loss | 9.63e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 330 |
| iterations | 8500 |
| time_elapsed | 128 |
| total_timesteps | 42500 |
| train/ | |
| entropy_loss | -40.3 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 20499 |
| policy_loss | 5.1e+08 |
| std | 0.927 |
| value_loss | 1.7e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4118580.1744499537
Sharpe: 0.9620511561976229
=================================
-------------------------------------
| time/ | |
| fps | 329 |
| iterations | 8600 |
| time_elapsed | 130 |
| total_timesteps | 43000 |
| train/ | |
| entropy_loss | -40.3 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 20599 |
| policy_loss | 1.52e+08 |
| std | 0.927 |
| value_loss | 1.83e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 329 |
| iterations | 8700 |
| time_elapsed | 131 |
| total_timesteps | 43500 |
| train/ | |
| entropy_loss | -40.3 |
| explained_variance | 2.38e-07 |
| learning_rate | 0.0002 |
| n_updates | 20699 |
| policy_loss | 2.01e+08 |
| std | 0.927 |
| value_loss | 2.66e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 329 |
| iterations | 8800 |
| time_elapsed | 133 |
| total_timesteps | 44000 |
| train/ | |
| entropy_loss | -40.3 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 20799 |
| policy_loss | 2.8e+08 |
| std | 0.927 |
| value_loss | 6.24e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 329 |
| iterations | 8900 |
| time_elapsed | 135 |
| total_timesteps | 44500 |
| train/ | |
| entropy_loss | -40.3 |
| explained_variance | -2.38e-07 |
| learning_rate | 0.0002 |
| n_updates | 20899 |
| policy_loss | 3.31e+08 |
| std | 0.926 |
| value_loss | 9.61e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 329 |
| iterations | 9000 |
| time_elapsed | 136 |
| total_timesteps | 45000 |
| train/ | |
| entropy_loss | -40.2 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 20999 |
| policy_loss | 4.87e+08 |
| std | 0.926 |
| value_loss | 1.68e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4246562.525827188
Sharpe: 0.9779057432228896
=================================
------------------------------------
| time/ | |
| fps | 329 |
| iterations | 9100 |
| time_elapsed | 138 |
| total_timesteps | 45500 |
| train/ | |
| entropy_loss | -40.2 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 21099 |
| policy_loss | 1.54e+08 |
| std | 0.926 |
| value_loss | 1.7e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 329 |
| iterations | 9200 |
| time_elapsed | 139 |
| total_timesteps | 46000 |
| train/ | |
| entropy_loss | -40.2 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 21199 |
| policy_loss | 1.94e+08 |
| std | 0.925 |
| value_loss | 2.63e+13 |
-------------------------------------
-------------------------------------
| time/ | |
| fps | 329 |
| iterations | 9300 |
| time_elapsed | 141 |
| total_timesteps | 46500 |
| train/ | |
| entropy_loss | -40.2 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 21299 |
| policy_loss | 2.97e+08 |
| std | 0.925 |
| value_loss | 6.54e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 329 |
| iterations | 9400 |
| time_elapsed | 142 |
| total_timesteps | 47000 |
| train/ | |
| entropy_loss | -40.2 |
| explained_variance | 1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 21399 |
| policy_loss | 3.59e+08 |
| std | 0.925 |
| value_loss | 9.92e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 329 |
| iterations | 9500 |
| time_elapsed | 144 |
| total_timesteps | 47500 |
| train/ | |
| entropy_loss | -40.2 |
| explained_variance | -1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 21499 |
| policy_loss | 4.6e+08 |
| std | 0.924 |
| value_loss | 1.94e+14 |
-------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4580219.125422531
Sharpe: 1.0290957953577193
=================================
------------------------------------
| time/ | |
| fps | 328 |
| iterations | 9600 |
| time_elapsed | 146 |
| total_timesteps | 48000 |
| train/ | |
| entropy_loss | -40.2 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 21599 |
| policy_loss | 1.49e+08 |
| std | 0.923 |
| value_loss | 1.61e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 328 |
| iterations | 9700 |
| time_elapsed | 147 |
| total_timesteps | 48500 |
| train/ | |
| entropy_loss | -40.1 |
| explained_variance | 2.38e-07 |
| learning_rate | 0.0002 |
| n_updates | 21699 |
| policy_loss | 1.83e+08 |
| std | 0.923 |
| value_loss | 2.48e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 328 |
| iterations | 9800 |
| time_elapsed | 149 |
| total_timesteps | 49000 |
| train/ | |
| entropy_loss | -40.1 |
| explained_variance | 1.79e-07 |
| learning_rate | 0.0002 |
| n_updates | 21799 |
| policy_loss | 3.18e+08 |
| std | 0.922 |
| value_loss | 6.2e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 328 |
| iterations | 9900 |
| time_elapsed | 150 |
| total_timesteps | 49500 |
| train/ | |
| entropy_loss | -40.1 |
| explained_variance | 0 |
| learning_rate | 0.0002 |
| n_updates | 21899 |
| policy_loss | 3.49e+08 |
| std | 0.922 |
| value_loss | 8.39e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 328 |
| iterations | 10000 |
| time_elapsed | 152 |
| total_timesteps | 50000 |
| train/ | |
| entropy_loss | -40.1 |
| explained_variance | 1.19e-07 |
| learning_rate | 0.0002 |
| n_updates | 21999 |
| policy_loss | 4.71e+08 |
| std | 0.921 |
| value_loss | 1.69e+14 |
------------------------------------
###Markdown
Model 2: **PPO**
###Code
agent = DRLAgent(env = env_train)
PPO_PARAMS = {
"n_steps": 2048,
"ent_coef": 0.005,
"learning_rate": 0.0001,
"batch_size": 128,
}
model_ppo = agent.get_model("ppo",model_kwargs = PPO_PARAMS)
trained_ppo = agent.train_model(model=model_ppo,
tb_log_name='ppo',
total_timesteps=80000)
###Output
Logging to tensorboard_log/ppo/ppo_3
-----------------------------
| time/ | |
| fps | 458 |
| iterations | 1 |
| time_elapsed | 4 |
| total_timesteps | 2048 |
-----------------------------
=================================
begin_total_asset:1000000
end_total_asset:4917364.6278486075
Sharpe: 1.074414829116363
=================================
--------------------------------------------
| time/ | |
| fps | 391 |
| iterations | 2 |
| time_elapsed | 10 |
| total_timesteps | 4096 |
| train/ | |
| approx_kl | -7.8231096e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -3.71e+14 |
| learning_rate | 0.0001 |
| loss | 7.78e+14 |
| n_updates | 10 |
| policy_gradient_loss | -6.16e-07 |
| std | 1 |
| value_loss | 1.57e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4996331.100586685
Sharpe: 1.0890927964884638
=================================
--------------------------------------------
| time/ | |
| fps | 373 |
| iterations | 3 |
| time_elapsed | 16 |
| total_timesteps | 6144 |
| train/ | |
| approx_kl | -3.5390258e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -8.76e+14 |
| learning_rate | 0.0001 |
| loss | 1.1e+15 |
| n_updates | 20 |
| policy_gradient_loss | -4.29e-07 |
| std | 1 |
| value_loss | 2.33e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4751039.2878817525
Sharpe: 1.0560179406423764
=================================
--------------------------------------------
| time/ | |
| fps | 365 |
| iterations | 4 |
| time_elapsed | 22 |
| total_timesteps | 8192 |
| train/ | |
| approx_kl | -1.6763806e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -8.01e+15 |
| learning_rate | 0.0001 |
| loss | 1.25e+15 |
| n_updates | 30 |
| policy_gradient_loss | -5.58e-07 |
| std | 1 |
| value_loss | 2.59e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4769059.347696523
Sharpe: 1.056814654380227
=================================
--------------------------------------------
| time/ | |
| fps | 360 |
| iterations | 5 |
| time_elapsed | 28 |
| total_timesteps | 10240 |
| train/ | |
| approx_kl | -5.5879354e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -2.55e+16 |
| learning_rate | 0.0001 |
| loss | 1.24e+15 |
| n_updates | 40 |
| policy_gradient_loss | -4.9e-07 |
| std | 1 |
| value_loss | 2.7e+15 |
--------------------------------------------
--------------------------------------------
| time/ | |
| fps | 358 |
| iterations | 6 |
| time_elapsed | 34 |
| total_timesteps | 12288 |
| train/ | |
| approx_kl | 1.13621354e-07 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -9.17e+16 |
| learning_rate | 0.0001 |
| loss | 1.35e+15 |
| n_updates | 50 |
| policy_gradient_loss | -4.28e-07 |
| std | 1 |
| value_loss | 2.77e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4816491.86007194
Sharpe: 1.0636199939613733
=================================
-------------------------------------------
| time/ | |
| fps | 356 |
| iterations | 7 |
| time_elapsed | 40 |
| total_timesteps | 14336 |
| train/ | |
| approx_kl | 3.5390258e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.42e+17 |
| learning_rate | 0.0001 |
| loss | 1.03e+15 |
| n_updates | 60 |
| policy_gradient_loss | -6.52e-07 |
| std | 1 |
| value_loss | 1.94e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4631919.83090099
Sharpe: 1.0396504731290799
=================================
-------------------------------------------
| time/ | |
| fps | 354 |
| iterations | 8 |
| time_elapsed | 46 |
| total_timesteps | 16384 |
| train/ | |
| approx_kl | 1.7508864e-07 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -6.93e+17 |
| learning_rate | 0.0001 |
| loss | 9.83e+14 |
| n_updates | 70 |
| policy_gradient_loss | -5.78e-07 |
| std | 1 |
| value_loss | 2.06e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4728763.286321457
Sharpe: 1.052390302374202
=================================
-------------------------------------------
| time/ | |
| fps | 353 |
| iterations | 9 |
| time_elapsed | 52 |
| total_timesteps | 18432 |
| train/ | |
| approx_kl | 4.4703484e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.72e+18 |
| learning_rate | 0.0001 |
| loss | 1.25e+15 |
| n_updates | 80 |
| policy_gradient_loss | -4.84e-07 |
| std | 1 |
| value_loss | 2.33e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4439983.024798136
Sharpe: 1.013829383303325
=================================
--------------------------------------------
| time/ | |
| fps | 352 |
| iterations | 10 |
| time_elapsed | 58 |
| total_timesteps | 20480 |
| train/ | |
| approx_kl | -1.3038516e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.7e+18 |
| learning_rate | 0.0001 |
| loss | 1.17e+15 |
| n_updates | 90 |
| policy_gradient_loss | -4.82e-07 |
| std | 1 |
| value_loss | 2.58e+15 |
--------------------------------------------
-------------------------------------------
| time/ | |
| fps | 352 |
| iterations | 11 |
| time_elapsed | 63 |
| total_timesteps | 22528 |
| train/ | |
| approx_kl | -9.313226e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.85e+18 |
| learning_rate | 0.0001 |
| loss | 1.2e+15 |
| n_updates | 100 |
| policy_gradient_loss | -5.2e-07 |
| std | 1 |
| value_loss | 2.51e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:5048884.524536961
Sharpe: 1.0963911876706685
=================================
-------------------------------------------
| time/ | |
| fps | 351 |
| iterations | 12 |
| time_elapsed | 69 |
| total_timesteps | 24576 |
| train/ | |
| approx_kl | 3.7252903e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -2.67e+18 |
| learning_rate | 0.0001 |
| loss | 1.44e+15 |
| n_updates | 110 |
| policy_gradient_loss | -4.53e-07 |
| std | 1 |
| value_loss | 2.8e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4824229.456193555
Sharpe: 1.0648549464252506
=================================
-------------------------------------------
| time/ | |
| fps | 351 |
| iterations | 13 |
| time_elapsed | 75 |
| total_timesteps | 26624 |
| train/ | |
| approx_kl | 3.3527613e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -3.38e+18 |
| learning_rate | 0.0001 |
| loss | 7.89e+14 |
| n_updates | 120 |
| policy_gradient_loss | -6.06e-07 |
| std | 1 |
| value_loss | 1.76e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4602974.615591427
Sharpe: 1.034753433280377
=================================
-------------------------------------------
| time/ | |
| fps | 350 |
| iterations | 14 |
| time_elapsed | 81 |
| total_timesteps | 28672 |
| train/ | |
| approx_kl | 8.8475645e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.75e+19 |
| learning_rate | 0.0001 |
| loss | 1.23e+15 |
| n_updates | 130 |
| policy_gradient_loss | -5.8e-07 |
| std | 1 |
| value_loss | 2.27e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4608422.583401322
Sharpe: 1.035300880612428
=================================
-------------------------------------------
| time/ | |
| fps | 349 |
| iterations | 15 |
| time_elapsed | 87 |
| total_timesteps | 30720 |
| train/ | |
| approx_kl | 1.3038516e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -7.71e+18 |
| learning_rate | 0.0001 |
| loss | 1.22e+15 |
| n_updates | 140 |
| policy_gradient_loss | -5.63e-07 |
| std | 1 |
| value_loss | 2.39e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4826869.636472441
Sharpe: 1.0676330284861433
=================================
--------------------------------------------
| time/ | |
| fps | 348 |
| iterations | 16 |
| time_elapsed | 94 |
| total_timesteps | 32768 |
| train/ | |
| approx_kl | -1.4901161e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.51e+19 |
| learning_rate | 0.0001 |
| loss | 1.22e+15 |
| n_updates | 150 |
| policy_gradient_loss | -5.78e-07 |
| std | 1 |
| value_loss | 2.7e+15 |
--------------------------------------------
-------------------------------------------
| time/ | |
| fps | 346 |
| iterations | 17 |
| time_elapsed | 100 |
| total_timesteps | 34816 |
| train/ | |
| approx_kl | -5.401671e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.48e+19 |
| learning_rate | 0.0001 |
| loss | 1.48e+15 |
| n_updates | 160 |
| policy_gradient_loss | -3.96e-07 |
| std | 1 |
| value_loss | 2.81e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4364006.929301854
Sharpe: 1.002176631256902
=================================
--------------------------------------------
| time/ | |
| fps | 345 |
| iterations | 18 |
| time_elapsed | 106 |
| total_timesteps | 36864 |
| train/ | |
| approx_kl | -1.0803342e-07 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.15e+19 |
| learning_rate | 0.0001 |
| loss | 8.41e+14 |
| n_updates | 170 |
| policy_gradient_loss | -4.91e-07 |
| std | 1 |
| value_loss | 1.58e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4796634.5596691
Sharpe: 1.0678319491053092
=================================
--------------------------------------------
| time/ | |
| fps | 344 |
| iterations | 19 |
| time_elapsed | 112 |
| total_timesteps | 38912 |
| train/ | |
| approx_kl | -1.3038516e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -4.21e+19 |
| learning_rate | 0.0001 |
| loss | 1.03e+15 |
| n_updates | 180 |
| policy_gradient_loss | -5.6e-07 |
| std | 1 |
| value_loss | 2.02e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4969786.413399254
Sharpe: 1.0823021486710163
=================================
--------------------------------------------
| time/ | |
| fps | 344 |
| iterations | 20 |
| time_elapsed | 118 |
| total_timesteps | 40960 |
| train/ | |
| approx_kl | -6.7055225e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -6.41e+19 |
| learning_rate | 0.0001 |
| loss | 1.22e+15 |
| n_updates | 190 |
| policy_gradient_loss | -2.87e-07 |
| std | 1 |
| value_loss | 2.4e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4885480.801922398
Sharpe: 1.0729451877791811
=================================
--------------------------------------------
| time/ | |
| fps | 343 |
| iterations | 21 |
| time_elapsed | 125 |
| total_timesteps | 43008 |
| train/ | |
| approx_kl | -5.5879354e-09 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -6.85e+19 |
| learning_rate | 0.0001 |
| loss | 1.62e+15 |
| n_updates | 200 |
| policy_gradient_loss | -5.24e-07 |
| std | 1 |
| value_loss | 2.95e+15 |
--------------------------------------------
-------------------------------------------
| time/ | |
| fps | 343 |
| iterations | 22 |
| time_elapsed | 131 |
| total_timesteps | 45056 |
| train/ | |
| approx_kl | 1.8067658e-07 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -7.01e+19 |
| learning_rate | 0.0001 |
| loss | 1.34e+15 |
| n_updates | 210 |
| policy_gradient_loss | -4.62e-07 |
| std | 1 |
| value_loss | 2.93e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:5613709.009268909
Sharpe: 1.1673870008513114
=================================
--------------------------------------------
| time/ | |
| fps | 342 |
| iterations | 23 |
| time_elapsed | 137 |
| total_timesteps | 47104 |
| train/ | |
| approx_kl | -2.0489097e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -6.72e+19 |
| learning_rate | 0.0001 |
| loss | 1.41e+15 |
| n_updates | 220 |
| policy_gradient_loss | -4.78e-07 |
| std | 1 |
| value_loss | 2.71e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:5043800.590470289
Sharpe: 1.0953673306850924
=================================
-------------------------------------------
| time/ | |
| fps | 342 |
| iterations | 24 |
| time_elapsed | 143 |
| total_timesteps | 49152 |
| train/ | |
| approx_kl | 2.4214387e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.37e+20 |
| learning_rate | 0.0001 |
| loss | 1.01e+15 |
| n_updates | 230 |
| policy_gradient_loss | -5.28e-07 |
| std | 1 |
| value_loss | 2.26e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4776576.852863929
Sharpe: 1.0593811754233755
=================================
-------------------------------------------
| time/ | |
| fps | 342 |
| iterations | 25 |
| time_elapsed | 149 |
| total_timesteps | 51200 |
| train/ | |
| approx_kl | 4.4703484e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -3.27e+20 |
| learning_rate | 0.0001 |
| loss | 1.21e+15 |
| n_updates | 240 |
| policy_gradient_loss | -4.82e-07 |
| std | 1 |
| value_loss | 2.46e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4468393.200157898
Sharpe: 1.0192746589767419
=================================
-------------------------------------------
| time/ | |
| fps | 341 |
| iterations | 26 |
| time_elapsed | 156 |
| total_timesteps | 53248 |
| train/ | |
| approx_kl | 2.6077032e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.96e+20 |
| learning_rate | 0.0001 |
| loss | 1.31e+15 |
| n_updates | 250 |
| policy_gradient_loss | -5.36e-07 |
| std | 1 |
| value_loss | 2.59e+15 |
-------------------------------------------
--------------------------------------------
| time/ | |
| fps | 341 |
| iterations | 27 |
| time_elapsed | 162 |
| total_timesteps | 55296 |
| train/ | |
| approx_kl | -1.3038516e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.68e+20 |
| learning_rate | 0.0001 |
| loss | 1.33e+15 |
| n_updates | 260 |
| policy_gradient_loss | -3.77e-07 |
| std | 1 |
| value_loss | 2.51e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4875234.39450474
Sharpe: 1.0721137742534572
=================================
--------------------------------------------
| time/ | |
| fps | 340 |
| iterations | 28 |
| time_elapsed | 168 |
| total_timesteps | 57344 |
| train/ | |
| approx_kl | -1.2479722e-07 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.66e+20 |
| learning_rate | 0.0001 |
| loss | 1.59e+15 |
| n_updates | 270 |
| policy_gradient_loss | -4.61e-07 |
| std | 1 |
| value_loss | 2.8e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4600459.210918712
Sharpe: 1.034756153745345
=================================
-------------------------------------------
| time/ | |
| fps | 340 |
| iterations | 29 |
| time_elapsed | 174 |
| total_timesteps | 59392 |
| train/ | |
| approx_kl | -4.284084e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.26e+20 |
| learning_rate | 0.0001 |
| loss | 8.07e+14 |
| n_updates | 280 |
| policy_gradient_loss | -5.44e-07 |
| std | 1 |
| value_loss | 1.62e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4526188.381438201
Sharpe: 1.0293846869900876
=================================
--------------------------------------------
| time/ | |
| fps | 339 |
| iterations | 30 |
| time_elapsed | 180 |
| total_timesteps | 61440 |
| train/ | |
| approx_kl | -2.4214387e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -6.44e+20 |
| learning_rate | 0.0001 |
| loss | 1.12e+15 |
| n_updates | 290 |
| policy_gradient_loss | -5.65e-07 |
| std | 1 |
| value_loss | 2.1e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4487836.803716703
Sharpe: 1.010974660894394
=================================
--------------------------------------------
| time/ | |
| fps | 339 |
| iterations | 31 |
| time_elapsed | 187 |
| total_timesteps | 63488 |
| train/ | |
| approx_kl | -2.6077032e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -4.47e+20 |
| learning_rate | 0.0001 |
| loss | 1.14e+15 |
| n_updates | 300 |
| policy_gradient_loss | -4.8e-07 |
| std | 1 |
| value_loss | 2.25e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4480729.650671386
Sharpe: 1.0219085518652522
=================================
--------------------------------------------
| time/ | |
| fps | 339 |
| iterations | 32 |
| time_elapsed | 193 |
| total_timesteps | 65536 |
| train/ | |
| approx_kl | -2.0302832e-07 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -3.87e+20 |
| learning_rate | 0.0001 |
| loss | 1.28e+15 |
| n_updates | 310 |
| policy_gradient_loss | -4.4e-07 |
| std | 1 |
| value_loss | 2.51e+15 |
--------------------------------------------
------------------------------------------
| time/ | |
| fps | 339 |
| iterations | 33 |
| time_elapsed | 199 |
| total_timesteps | 67584 |
| train/ | |
| approx_kl | 1.359731e-07 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -3.68e+20 |
| learning_rate | 0.0001 |
| loss | 1.24e+15 |
| n_updates | 320 |
| policy_gradient_loss | -4.51e-07 |
| std | 1 |
| value_loss | 2.66e+15 |
------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4399373.734699048
Sharpe: 1.005407087483561
=================================
-------------------------------------------
| time/ | |
| fps | 338 |
| iterations | 34 |
| time_elapsed | 205 |
| total_timesteps | 69632 |
| train/ | |
| approx_kl | 2.2351742e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -2.29e+20 |
| learning_rate | 0.0001 |
| loss | 8.5e+14 |
| n_updates | 330 |
| policy_gradient_loss | -5.56e-07 |
| std | 1 |
| value_loss | 1.64e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4305742.921261859
Sharpe: 0.9945061913961891
=================================
-------------------------------------------
| time/ | |
| fps | 338 |
| iterations | 35 |
| time_elapsed | 211 |
| total_timesteps | 71680 |
| train/ | |
| approx_kl | 1.3411045e-07 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -7.11e+20 |
| learning_rate | 0.0001 |
| loss | 7.97e+14 |
| n_updates | 340 |
| policy_gradient_loss | -6.48e-07 |
| std | 1 |
| value_loss | 1.8e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4794175.629957249
Sharpe: 1.0611635246548963
=================================
--------------------------------------------
| time/ | |
| fps | 338 |
| iterations | 36 |
| time_elapsed | 217 |
| total_timesteps | 73728 |
| train/ | |
| approx_kl | -3.3527613e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.16e+21 |
| learning_rate | 0.0001 |
| loss | 1.07e+15 |
| n_updates | 350 |
| policy_gradient_loss | -4.82e-07 |
| std | 1 |
| value_loss | 2.06e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4467487.416264421
Sharpe: 1.021012208464475
=================================
------------------------------------------
| time/ | |
| fps | 338 |
| iterations | 37 |
| time_elapsed | 224 |
| total_timesteps | 75776 |
| train/ | |
| approx_kl | 5.401671e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -9.89e+20 |
| learning_rate | 0.0001 |
| loss | 1.46e+15 |
| n_updates | 360 |
| policy_gradient_loss | -4.78e-07 |
| std | 1 |
| value_loss | 2.75e+15 |
------------------------------------------
-------------------------------------------
| time/ | |
| fps | 338 |
| iterations | 38 |
| time_elapsed | 229 |
| total_timesteps | 77824 |
| train/ | |
| approx_kl | 1.6763806e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -7.64e+20 |
| learning_rate | 0.0001 |
| loss | 1.25e+15 |
| n_updates | 370 |
| policy_gradient_loss | -4.54e-07 |
| std | 1 |
| value_loss | 2.57e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4806649.219027834
Sharpe: 1.0604486398186765
=================================
------------------------------------------
| time/ | |
| fps | 338 |
| iterations | 39 |
| time_elapsed | 236 |
| total_timesteps | 79872 |
| train/ | |
| approx_kl | 4.284084e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -6.96e+20 |
| learning_rate | 0.0001 |
| loss | 1.28e+15 |
| n_updates | 380 |
| policy_gradient_loss | -5.9e-07 |
| std | 1 |
| value_loss | 2.44e+15 |
------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4653147.508966551
Sharpe: 1.043189911078732
=================================
-------------------------------------------
| time/ | |
| fps | 338 |
| iterations | 40 |
| time_elapsed | 242 |
| total_timesteps | 81920 |
| train/ | |
| approx_kl | 6.3329935e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.04e+21 |
| learning_rate | 0.0001 |
| loss | 1.01e+15 |
| n_updates | 390 |
| policy_gradient_loss | -5.33e-07 |
| std | 1 |
| value_loss | 1.82e+15 |
-------------------------------------------
###Markdown
Model 3: **DDPG**
###Code
agent = DRLAgent(env = env_train)
DDPG_PARAMS = {"batch_size": 128, "buffer_size": 50000, "learning_rate": 0.001}
model_ddpg = agent.get_model("ddpg",model_kwargs = DDPG_PARAMS)
trained_ddpg = agent.train_model(model=model_ddpg,
tb_log_name='ddpg',
total_timesteps=50000)
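# A minimal sketch (assumption: agent.train_model returns the underlying
# Stable-Baselines3 model, which exposes the standard save() method): persist
# the trained agent so it can be reloaded later without retraining.
trained_ddpg.save("trained_ddpg")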
###Output
Logging to tensorboard_log/ddpg/ddpg_2
=================================
begin_total_asset:1000000
end_total_asset:4625995.900359718
Sharpe: 1.040202670783119
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
----------------------------------
| time/ | |
| episodes | 4 |
| fps | 22 |
| time_elapsed | 439 |
| total timesteps | 10064 |
| train/ | |
| actor_loss | -6.99e+07 |
| critic_loss | 7.27e+12 |
| learning_rate | 0.001 |
| n_updates | 7548 |
----------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
----------------------------------
| time/ | |
| episodes | 8 |
| fps | 20 |
| time_elapsed | 980 |
| total timesteps | 20128 |
| train/ | |
| actor_loss | -1.44e+08 |
| critic_loss | 1.81e+13 |
| learning_rate | 0.001 |
| n_updates | 17612 |
----------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
----------------------------------
| time/ | |
| episodes | 12 |
| fps | 19 |
| time_elapsed | 1542 |
| total timesteps | 30192 |
| train/ | |
| actor_loss | -1.88e+08 |
| critic_loss | 2.72e+13 |
| learning_rate | 0.001 |
| n_updates | 27676 |
----------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
----------------------------------
| time/ | |
| episodes | 16 |
| fps | 18 |
| time_elapsed | 2133 |
| total timesteps | 40256 |
| train/ | |
| actor_loss | -2.15e+08 |
| critic_loss | 3.45e+13 |
| learning_rate | 0.001 |
| n_updates | 37740 |
----------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
---------------------------------
| time/ | |
| episodes | 20 |
| fps | 17 |
| time_elapsed | 2874 |
| total timesteps | 50320 |
| train/ | |
| actor_loss | -2.3e+08 |
| critic_loss | 4.05e+13 |
| learning_rate | 0.001 |
| n_updates | 47804 |
---------------------------------
###Markdown
Model 4: **SAC**
###Code
agent = DRLAgent(env = env_train)
SAC_PARAMS = {
"batch_size": 128,
"buffer_size": 100000,
"learning_rate": 0.0003,
"learning_starts": 100,
"ent_coef": "auto_0.1",
}
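# Note on ent_coef (Stable-Baselines3 convention): "auto_0.1" enables
# automatic entropy-coefficient tuning, using 0.1 as the initial value.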
model_sac = agent.get_model("sac",model_kwargs = SAC_PARAMS)
trained_sac = agent.train_model(model=model_sac,
tb_log_name='sac',
total_timesteps=50000)
###Output
Logging to tensorboard_log/sac/sac_1
=================================
begin_total_asset:1000000
end_total_asset:4449463.498168942
Sharpe: 1.01245667390232
=================================
=================================
begin_total_asset:1000000
end_total_asset:4418643.239765096
Sharpe: 1.0135796594260282
=================================
=================================
begin_total_asset:1000000
end_total_asset:4418644.1960784905
Sharpe: 1.0135797537524718
=================================
=================================
begin_total_asset:1000000
end_total_asset:4418659.429680678
Sharpe: 1.013581852537709
=================================
----------------------------------
| time/ | |
| episodes | 4 |
| fps | 12 |
| time_elapsed | 783 |
| total timesteps | 10064 |
| train/ | |
| actor_loss | -8.83e+07 |
| critic_loss | 6.57e+12 |
| ent_coef | 2.24 |
| ent_coef_loss | -205 |
| learning_rate | 0.0003 |
| n_updates | 9963 |
----------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4418651.576406099
Sharpe: 1.013581224026754
=================================
=================================
begin_total_asset:1000000
end_total_asset:4418670.948269031
Sharpe: 1.0135838030234754
=================================
=================================
begin_total_asset:1000000
end_total_asset:4418682.278829884
Sharpe: 1.013585596968056
=================================
=================================
begin_total_asset:1000000
end_total_asset:4418791.911955293
Sharpe: 1.0136007328171013
=================================
----------------------------------
| time/ | |
| episodes | 8 |
| fps | 12 |
| time_elapsed | 1585 |
| total timesteps | 20128 |
| train/ | |
| actor_loss | -1.51e+08 |
| critic_loss | 1.12e+13 |
| ent_coef | 41.7 |
| ent_coef_loss | -670 |
| learning_rate | 0.0003 |
| n_updates | 20027 |
----------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4418737.365107464
Sharpe: 1.0135970410224868
=================================
=================================
begin_total_asset:1000000
end_total_asset:4418754.895735274
Sharpe: 1.0135965589029627
=================================
=================================
begin_total_asset:1000000
end_total_asset:4419325.814567342
Sharpe: 1.0136807224228588
=================================
=================================
begin_total_asset:1000000
end_total_asset:4418142.473513333
Sharpe: 1.0135234795926031
=================================
----------------------------------
| time/ | |
| episodes | 12 |
| fps | 12 |
| time_elapsed | 2400 |
| total timesteps | 30192 |
| train/ | |
| actor_loss | -1.85e+08 |
| critic_loss | 1.87e+13 |
| ent_coef | 725 |
| ent_coef_loss | -673 |
| learning_rate | 0.0003 |
| n_updates | 30091 |
----------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4422046.188863339
Sharpe: 1.0140936726052256
=================================
=================================
begin_total_asset:1000000
end_total_asset:4424919.463828854
Sharpe: 1.014521127041106
=================================
=================================
begin_total_asset:1000000
end_total_asset:4427483.152494239
Sharpe: 1.0148626804754584
=================================
=================================
begin_total_asset:1000000
end_total_asset:4460697.650185859
Sharpe: 1.019852362102548
=================================
----------------------------------
| time/ | |
| episodes | 16 |
| fps | 12 |
| time_elapsed | 3210 |
| total timesteps | 40256 |
| train/ | |
| actor_loss | -1.93e+08 |
| critic_loss | 1.62e+13 |
| ent_coef | 1.01e+04 |
| ent_coef_loss | -238 |
| learning_rate | 0.0003 |
| n_updates | 40155 |
----------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4434035.982803257
Sharpe: 1.0161512551319891
=================================
=================================
begin_total_asset:1000000
end_total_asset:4454728.906041551
Sharpe: 1.018484863448905
=================================
=================================
begin_total_asset:1000000
end_total_asset:4475667.120269234
Sharpe: 1.0215545521682856
=================================
###Markdown
Model 5: **TD3**
###Code
agent = DRLAgent(env = env_train)
TD3_PARAMS = {"batch_size": 100,
"buffer_size": 1000000,
"learning_rate": 0.001}
model_td3 = agent.get_model("td3",model_kwargs = TD3_PARAMS)
trained_td3 = agent.train_model(model=model_td3,
tb_log_name='td3',
total_timesteps=30000)
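# Note: TD3 is trained for 30,000 timesteps here, versus 50,000 for the DDPG
# and SAC runs above; trained_td3 is the agent used for trading below.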
###Output
Logging to tensorboard_log/td3/td3_1
=================================
begin_total_asset:1000000
end_total_asset:5232441.848437611
Sharpe: 0.8749907118878204
=================================
=================================
begin_total_asset:1000000
end_total_asset:5140658.98428856
Sharpe: 0.8628057073557059
=================================
=================================
begin_total_asset:1000000
end_total_asset:5140658.98428856
Sharpe: 0.8628057073557059
=================================
=================================
begin_total_asset:1000000
end_total_asset:5140658.98428856
Sharpe: 0.8628057073557059
=================================
----------------------------------
| time/ | |
| episodes | 4 |
| fps | 25 |
| time_elapsed | 445 |
| total timesteps | 11572 |
| train/ | |
| actor_loss | -4.69e+07 |
| critic_loss | 1.08e+13 |
| learning_rate | 0.001 |
| n_updates | 8679 |
----------------------------------
=================================
begin_total_asset:1000000
end_total_asset:5140658.98428856
Sharpe: 0.8628057073557059
=================================
=================================
begin_total_asset:1000000
end_total_asset:5140658.98428856
Sharpe: 0.8628057073557059
=================================
=================================
begin_total_asset:1000000
end_total_asset:5140658.98428856
Sharpe: 0.8628057073557059
=================================
=================================
begin_total_asset:1000000
end_total_asset:5140658.98428856
Sharpe: 0.8628057073557059
=================================
----------------------------------
| time/ | |
| episodes | 8 |
| fps | 23 |
| time_elapsed | 985 |
| total timesteps | 23144 |
| train/ | |
| actor_loss | -1.05e+08 |
| critic_loss | 2.77e+13 |
| learning_rate | 0.001 |
| n_updates | 20251 |
----------------------------------
=================================
begin_total_asset:1000000
end_total_asset:5140658.98428856
Sharpe: 0.8628057073557059
=================================
=================================
begin_total_asset:1000000
end_total_asset:5140658.98428856
Sharpe: 0.8628057073557059
=================================
=================================
begin_total_asset:1000000
end_total_asset:5140658.98428856
Sharpe: 0.8628057073557059
=================================
###Markdown
Trading: Assume that we have $1,000,000 of initial capital at 2020-07-01. We use the trained TD3 model to trade the Dow Jones 30 constituent stocks.
###Code
trade = data_split(df,'2020-07-01', '2021-07-01')
e_trade_gym = StockPortfolioEnv(df = trade, **env_kwargs)
trade.shape
df_daily_return, df_actions = DRLAgent.DRL_prediction(model=trained_td3,
environment = e_trade_gym)
df_daily_return.head()
df_daily_return.to_csv('df_daily_return.csv')
df_actions.head()
df_actions.to_csv('df_actions.csv')
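# Illustrative cross-check (a sketch, assuming the 'daily_return' column holds
# the portfolio's per-step return, as the plotting section below assumes):
# compounding it from the $1,000,000 starting capital should reproduce the
# end_total_asset figure printed during prediction.
account_value_check = 1_000_000 * (1 + df_daily_return['daily_return']).cumprod()
final_value_check = account_value_check.iloc[-1]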
###Output
_____no_output_____
###Markdown
Part 7: Backtest Our Strategy. Backtesting plays a key role in evaluating the performance of a trading strategy. An automated backtesting tool is preferred because it reduces human error. We usually use the Quantopian pyfolio package to backtest our trading strategies. It is easy to use and consists of various individual plots that provide a comprehensive picture of the performance of a trading strategy. 7.1 BackTestStats: pass in the daily returns (df_daily_return above); this information is recorded by the env class.
###Code
from pyfolio import timeseries
DRL_strat = convert_daily_return_to_pyfolio_ts(df_daily_return)
perf_func = timeseries.perf_stats
perf_stats_all = perf_func( returns=DRL_strat,
factor_returns=DRL_strat,
positions=None, transactions=None, turnover_denom="AGB")
print("==============DRL Strategy Stats===========")
perf_stats_all
#baseline stats
print("==============Get Baseline Stats===========")
baseline_df = get_baseline(
ticker="^DJI",
start = df_daily_return.loc[0,'date'],
end = df_daily_return.loc[len(df_daily_return)-1,'date'])
stats = backtest_stats(baseline_df, value_col_name = 'close')
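# Manual sanity check (a sketch, assuming 252 trading days per year and a zero
# risk-free rate): annualized Sharpe ratio of the DRL strategy's daily returns.
sharpe_manual = (252 ** 0.5) * DRL_strat.mean() / DRL_strat.std()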
###Output
==============Get Baseline Stats===========
[*********************100%***********************] 1 of 1 completed
Shape of DataFrame: (251, 8)
Annual return 0.334042
Cumulative returns 0.332517
Annual volatility 0.146033
Sharpe ratio 2.055458
Calmar ratio 3.740347
Stability 0.945402
Max drawdown -0.089308
Omega ratio 1.408111
Sortino ratio 3.075978
Skew NaN
Kurtosis NaN
Tail ratio 1.078766
Daily value at risk -0.017207
dtype: float64
###Markdown
7.2 BackTestPlot
###Code
import pyfolio
%matplotlib inline
baseline_df = get_baseline(
ticker='^DJI', start=df_daily_return.loc[0,'date'], end='2021-07-01'
)
baseline_returns = get_daily_return(baseline_df, value_col_name="close")
with pyfolio.plotting.plotting_context(font_scale=1.1):
pyfolio.create_full_tear_sheet(returns = DRL_strat,
benchmark_rets=baseline_returns, set_context=False)
###Output
_____no_output_____
###Markdown
Min-Variance Portfolio Allocation
###Code
!pip install PyPortfolioOpt
from pypfopt.efficient_frontier import EfficientFrontier
from pypfopt import risk_models
unique_tic = trade.tic.unique()
unique_trade_date = trade.date.unique()
df.head()
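# The loop below runs a daily-rebalancing minimum-variance backtest: on each
# trade date it estimates the covariance matrix from the stored return_list,
# solves for the minimum-variance weights (capped at 10% per asset), converts
# the current capital into share holdings, and marks the portfolio to the
# next day's closing prices.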
#calculate_portfolio_minimum_variance
portfolio = pd.DataFrame(index = range(1), columns = unique_trade_date)
initial_capital = 1000000
portfolio.loc[0,unique_trade_date[0]] = initial_capital
for i in range(len(unique_trade_date)-1):
df_temp = df[df.date==unique_trade_date[i]].reset_index(drop=True)
df_temp_next = df[df.date==unique_trade_date[i+1]].reset_index(drop=True)
#Sigma = risk_models.sample_cov(df_temp.return_list[0])
#calculate covariance matrix
Sigma = df_temp.return_list[0].cov()
    #set up the optimizer: expected returns are not needed for pure minimum variance, and each weight is capped at 10%
ef_min_var = EfficientFrontier(None, Sigma,weight_bounds=(0, 0.1))
#minimum variance
raw_weights_min_var = ef_min_var.min_volatility()
#get weights
cleaned_weights_min_var = ef_min_var.clean_weights()
#current capital
cap = portfolio.iloc[0, i]
#current cash invested for each stock
current_cash = [element * cap for element in list(cleaned_weights_min_var.values())]
# current held shares
current_shares = list(np.array(current_cash)
/ np.array(df_temp.close))
# next time period price
next_price = np.array(df_temp_next.close)
    ## next total account value = current shares dotted with next prices
portfolio.iloc[0, i+1] = np.dot(current_shares, next_price)
portfolio=portfolio.T
portfolio.columns = ['account_value']
portfolio.head()
time_ind = pd.Series(df_daily_return.date)
td3_cumpod =(df_daily_return.daily_return+1).cumprod()-1
min_var_cumpod =(portfolio.account_value.pct_change()+1).cumprod()-1
dji_cumpod =(baseline_returns+1).cumprod()-1
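# Endpoint comparison (illustrative): final cumulative return of each of the
# three curves plotted in the next cell.
final_cum_returns = {
    'TD3': td3_cumpod.iloc[-1],
    'Min-Variance': min_var_cumpod.iloc[-1],
    'DJIA': dji_cumpod.iloc[-1],
}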
###Output
_____no_output_____
###Markdown
Plotly: DRL, Min-Variance, DJIA
###Code
from datetime import datetime as dt
import matplotlib.pyplot as plt
import plotly
import plotly.graph_objs as go
trace0_portfolio = go.Scatter(x = time_ind, y = td3_cumpod, mode = 'lines', name = 'TD3 (Portfolio Allocation)')
trace1_portfolio = go.Scatter(x = time_ind, y = dji_cumpod, mode = 'lines', name = 'DJIA')
trace2_portfolio = go.Scatter(x = time_ind, y = min_var_cumpod, mode = 'lines', name = 'Min-Variance')
#trace3_portfolio = go.Scatter(x = time_ind, y = ddpg_cumpod, mode = 'lines', name = 'DDPG')
#trace4_portfolio = go.Scatter(x = time_ind, y = addpg_cumpod, mode = 'lines', name = 'Adaptive-DDPG')
#trace5_portfolio = go.Scatter(x = time_ind, y = min_cumpod, mode = 'lines', name = 'Min-Variance')
#trace4 = go.Scatter(x = time_ind, y = addpg_cumpod, mode = 'lines', name = 'Adaptive-DDPG')
#trace2 = go.Scatter(x = time_ind, y = portfolio_cost_minv, mode = 'lines', name = 'Min-Variance')
#trace3 = go.Scatter(x = time_ind, y = spx_value, mode = 'lines', name = 'SPX')
fig = go.Figure()
fig.add_trace(trace0_portfolio)
fig.add_trace(trace1_portfolio)
fig.add_trace(trace2_portfolio)
fig.update_layout(
legend=dict(
x=0,
y=1,
traceorder="normal",
font=dict(
family="sans-serif",
size=15,
color="black"
),
bgcolor="White",
bordercolor="white",
borderwidth=2
),
)
#fig.update_layout(legend_orientation="h")
fig.update_layout(title={
#'text': "Cumulative Return using FinRL",
'y':0.85,
'x':0.5,
'xanchor': 'center',
'yanchor': 'top'})
#with Transaction cost
#fig.update_layout(title = 'Quarterly Trade Date')
fig.update_layout(
# margin=dict(l=20, r=20, t=20, b=20),
paper_bgcolor='rgba(1,1,0,0)',
plot_bgcolor='rgba(1, 1, 0, 0)',
#xaxis_title="Date",
yaxis_title="Cumulative Return",
xaxis={'type': 'date',
'tick0': time_ind[0],
'tickmode': 'linear',
'dtick': 86400000.0 *80}
)
fig.update_xaxes(showline=True,linecolor='black',showgrid=True, gridwidth=1, gridcolor='LightSteelBlue',mirror=True)
fig.update_yaxes(showline=True,linecolor='black',showgrid=True, gridwidth=1, gridcolor='LightSteelBlue',mirror=True)
fig.update_yaxes(zeroline=True, zerolinewidth=1, zerolinecolor='LightSteelBlue')
fig.show()
###Output
_____no_output_____
###Markdown
Deep Reinforcement Learning for Stock Trading from Scratch: Portfolio Allocation. Tutorial on using OpenAI DRL to perform portfolio allocation in one Jupyter Notebook | Presented at NeurIPS 2020: Deep RL Workshop * This blog is based on our paper: FinRL: A Deep Reinforcement Learning Library for Automated Stock Trading in Quantitative Finance, presented at NeurIPS 2020: Deep RL Workshop. * Check out the Medium blog for detailed explanations: * Please report any issues to our Github: https://github.com/AI4Finance-LLC/FinRL-Library/issues * **Pytorch Version** Content * [1. Problem Definition](0) * [2. Getting Started - Load Python packages](1) * [2.1. Install Packages](1.1) * [2.2. Check Additional Packages](1.2) * [2.3. Import Packages](1.3) * [2.4. Create Folders](1.4) * [3. Download Data](2) * [4. Preprocess Data](3) * [4.1. Technical Indicators](3.1) * [4.2. Perform Feature Engineering](3.2) * [5. Build Environment](4) * [5.1. Training & Trade Data Split](4.1) * [5.2. User-defined Environment](4.2) * [5.3. Initialize Environment](4.3) * [6. Implement DRL Algorithms](5) * [7. Backtesting Performance](6) * [7.1. BackTestStats](6.1) * [7.2. BackTestPlot](6.2) * [7.3. Baseline Stats](6.3) * [7.4. Compare to Stock Market Index](6.4) Part 1. Problem Definition This problem is to design an automated trading solution for portfolio allocation. We model the trading process as a Markov Decision Process (MDP). We then formulate our trading goal as a maximization problem. The algorithm is trained using Deep Reinforcement Learning (DRL) algorithms, and the components of the reinforcement learning environment are: * Action: The action space describes the allowed actions through which the agent interacts with the environment. Normally, a ∈ A includes three actions: a ∈ {−1, 0, 1}, where −1, 0, 1 represent selling, holding, and buying one stock. Also, an action can be carried out on multiple shares. We use an action space {−k, ..., −1, 0, 1, ..., k}, where k denotes the number of shares. For example, "Buy 10 shares of AAPL" or "Sell 10 shares of AAPL" are 10 or −10, respectively. * Reward function: r(s, a, s′) is the incentive mechanism for an agent to learn a better action. It is the change of the portfolio value when action a is taken at state s and the agent arrives at the new state s′, i.e., r(s, a, s′) = v′ − v, where v′ and v represent the portfolio values at states s′ and s, respectively. * State: The state space describes the observations that the agent receives from the environment. Just as a human trader needs to analyze various information before executing a trade, so our trading agent observes many different features to better learn in an interactive environment. * Environment: Dow 30 constituents. The data that we will be using for this case study is obtained from the Yahoo Finance API. The data contains Open-High-Low-Close prices and volume. Part 2. Getting Started - Load Python Packages 2.1. Install all the packages through the FinRL library
###Code
## install finrl library
!pip install git+https://github.com/AI4Finance-LLC/FinRL-Library.git
###Output
Collecting git+https://github.com/AI4Finance-LLC/FinRL-Library.git
Cloning https://github.com/AI4Finance-LLC/FinRL-Library.git to /tmp/pip-req-build-q5i8wlg8
Running command git clone -q https://github.com/AI4Finance-LLC/FinRL-Library.git /tmp/pip-req-build-q5i8wlg8
Collecting pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2
Cloning https://github.com/quantopian/pyfolio.git to /tmp/pip-install-iklozlwf/pyfolio_8412840d4dbc46dbba3ea56f3f97f75c
Running command git clone -q https://github.com/quantopian/pyfolio.git /tmp/pip-install-iklozlwf/pyfolio_8412840d4dbc46dbba3ea56f3f97f75c
Requirement already satisfied: numpy>=1.17.3 in /usr/local/lib/python3.7/dist-packages (from finrl==0.3.1) (1.19.5)
Requirement already satisfied: pandas>=1.1.5 in /usr/local/lib/python3.7/dist-packages (from finrl==0.3.1) (1.1.5)
Collecting stockstats
Downloading stockstats-0.3.2-py2.py3-none-any.whl (13 kB)
Collecting yfinance
Downloading yfinance-0.1.63.tar.gz (26 kB)
Requirement already satisfied: matplotlib in /usr/local/lib/python3.7/dist-packages (from finrl==0.3.1) (3.2.2)
Requirement already satisfied: scikit-learn>=0.21.0 in /usr/local/lib/python3.7/dist-packages (from finrl==0.3.1) (0.22.2.post1)
Requirement already satisfied: gym>=0.17 in /usr/local/lib/python3.7/dist-packages (from finrl==0.3.1) (0.17.3)
Collecting stable-baselines3[extra]
Downloading stable_baselines3-1.1.0-py3-none-any.whl (172 kB)
[K |████████████████████████████████| 172 kB 7.0 MB/s
[?25hCollecting ray[default]
Downloading ray-1.6.0-cp37-cp37m-manylinux2014_x86_64.whl (49.6 MB)
[K |████████████████████████████████| 49.6 MB 6.2 kB/s
[?25hCollecting lz4
Downloading lz4-3.1.3-cp37-cp37m-manylinux2010_x86_64.whl (1.8 MB)
[K |████████████████████████████████| 1.8 MB 14.2 MB/s
[?25hCollecting tensorboardX
Downloading tensorboardX-2.4-py2.py3-none-any.whl (124 kB)
[K |████████████████████████████████| 124 kB 57.6 MB/s
[?25hCollecting gputil
Downloading GPUtil-1.4.0.tar.gz (5.5 kB)
Collecting trading_calendars
Downloading trading_calendars-2.1.1.tar.gz (108 kB)
[K |████████████████████████████████| 108 kB 54.7 MB/s
[?25hCollecting alpaca_trade_api
Downloading alpaca_trade_api-1.2.3-py3-none-any.whl (40 kB)
[K |████████████████████████████████| 40 kB 4.2 MB/s
[?25hCollecting ccxt
Downloading ccxt-1.55.84-py2.py3-none-any.whl (2.0 MB)
[K |████████████████████████████████| 2.0 MB 33.4 MB/s
[?25hCollecting jqdatasdk
Downloading jqdatasdk-1.8.10-py3-none-any.whl (153 kB)
[K |████████████████████████████████| 153 kB 59.2 MB/s
[?25hCollecting wrds
Downloading wrds-3.1.0-py3-none-any.whl (12 kB)
Requirement already satisfied: pytest in /usr/local/lib/python3.7/dist-packages (from finrl==0.3.1) (3.6.4)
Requirement already satisfied: setuptools>=41.4.0 in /usr/local/lib/python3.7/dist-packages (from finrl==0.3.1) (57.4.0)
Requirement already satisfied: wheel>=0.33.6 in /usr/local/lib/python3.7/dist-packages (from finrl==0.3.1) (0.37.0)
Requirement already satisfied: ipython>=3.2.3 in /usr/local/lib/python3.7/dist-packages (from pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.1) (5.5.0)
Requirement already satisfied: pytz>=2014.10 in /usr/local/lib/python3.7/dist-packages (from pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.1) (2018.9)
Requirement already satisfied: scipy>=0.14.0 in /usr/local/lib/python3.7/dist-packages (from pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.1) (1.4.1)
Requirement already satisfied: seaborn>=0.7.1 in /usr/local/lib/python3.7/dist-packages (from pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.1) (0.11.1)
Collecting empyrical>=0.5.0
Downloading empyrical-0.5.5.tar.gz (52 kB)
[K |████████████████████████████████| 52 kB 997 kB/s
[?25hRequirement already satisfied: pandas-datareader>=0.2 in /usr/local/lib/python3.7/dist-packages (from empyrical>=0.5.0->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.1) (0.9.0)
Requirement already satisfied: pyglet<=1.5.0,>=1.4.0 in /usr/local/lib/python3.7/dist-packages (from gym>=0.17->finrl==0.3.1) (1.5.0)
Requirement already satisfied: cloudpickle<1.7.0,>=1.2.0 in /usr/local/lib/python3.7/dist-packages (from gym>=0.17->finrl==0.3.1) (1.3.0)
Requirement already satisfied: pexpect in /usr/local/lib/python3.7/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.1) (4.8.0)
Requirement already satisfied: decorator in /usr/local/lib/python3.7/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.1) (4.4.2)
Requirement already satisfied: simplegeneric>0.8 in /usr/local/lib/python3.7/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.1) (0.8.1)
Requirement already satisfied: prompt-toolkit<2.0.0,>=1.0.4 in /usr/local/lib/python3.7/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.1) (1.0.18)
Requirement already satisfied: traitlets>=4.2 in /usr/local/lib/python3.7/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.1) (5.0.5)
Requirement already satisfied: pickleshare in /usr/local/lib/python3.7/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.1) (0.7.5)
Requirement already satisfied: pygments in /usr/local/lib/python3.7/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.1) (2.6.1)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->finrl==0.3.1) (1.3.1)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.7/dist-packages (from matplotlib->finrl==0.3.1) (0.10.0)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->finrl==0.3.1) (2.4.7)
Requirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->finrl==0.3.1) (2.8.2)
Requirement already satisfied: six in /usr/local/lib/python3.7/dist-packages (from cycler>=0.10->matplotlib->finrl==0.3.1) (1.15.0)
Requirement already satisfied: requests>=2.19.0 in /usr/local/lib/python3.7/dist-packages (from pandas-datareader>=0.2->empyrical>=0.5.0->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.1) (2.23.0)
Requirement already satisfied: lxml in /usr/local/lib/python3.7/dist-packages (from pandas-datareader>=0.2->empyrical>=0.5.0->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.1) (4.2.6)
Requirement already satisfied: wcwidth in /usr/local/lib/python3.7/dist-packages (from prompt-toolkit<2.0.0,>=1.0.4->ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.1) (0.2.5)
Requirement already satisfied: future in /usr/local/lib/python3.7/dist-packages (from pyglet<=1.5.0,>=1.4.0->gym>=0.17->finrl==0.3.1) (0.16.0)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests>=2.19.0->pandas-datareader>=0.2->empyrical>=0.5.0->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.1) (2021.5.30)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests>=2.19.0->pandas-datareader>=0.2->empyrical>=0.5.0->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.1) (3.0.4)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests>=2.19.0->pandas-datareader>=0.2->empyrical>=0.5.0->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.1) (2.10)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests>=2.19.0->pandas-datareader>=0.2->empyrical>=0.5.0->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.1) (1.24.3)
Requirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.7/dist-packages (from scikit-learn>=0.21.0->finrl==0.3.1) (1.0.1)
Requirement already satisfied: ipython-genutils in /usr/local/lib/python3.7/dist-packages (from traitlets>=4.2->ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.1) (0.2.0)
Collecting websockets<10,>=8.0
Downloading websockets-9.1-cp37-cp37m-manylinux2010_x86_64.whl (103 kB)
[K |████████████████████████████████| 103 kB 27.2 MB/s
[?25hRequirement already satisfied: msgpack==1.0.2 in /usr/local/lib/python3.7/dist-packages (from alpaca_trade_api->finrl==0.3.1) (1.0.2)
Collecting websocket-client<2,>=0.56.0
Downloading websocket_client-1.2.1-py2.py3-none-any.whl (52 kB)
[K |████████████████████████████████| 52 kB 1.1 MB/s
[?25hCollecting aiodns>=1.1.1
Downloading aiodns-3.0.0-py3-none-any.whl (5.0 kB)
Collecting yarl==1.6.3
Downloading yarl-1.6.3-cp37-cp37m-manylinux2014_x86_64.whl (294 kB)
[K |████████████████████████████████| 294 kB 74.0 MB/s
[?25hCollecting aiohttp<3.8,>=3.7.4
Downloading aiohttp-3.7.4.post0-cp37-cp37m-manylinux2014_x86_64.whl (1.3 MB)
[K |████████████████████████████████| 1.3 MB 48.1 MB/s
[?25hCollecting cryptography>=2.6.1
Downloading cryptography-3.4.8-cp36-abi3-manylinux_2_24_x86_64.whl (3.0 MB)
[K |████████████████████████████████| 3.0 MB 41.2 MB/s
[?25hRequirement already satisfied: typing-extensions>=3.7.4 in /usr/local/lib/python3.7/dist-packages (from yarl==1.6.3->ccxt->finrl==0.3.1) (3.7.4.3)
Collecting multidict>=4.0
Downloading multidict-5.1.0-cp37-cp37m-manylinux2014_x86_64.whl (142 kB)
[K |████████████████████████████████| 142 kB 60.3 MB/s
[?25hCollecting pycares>=4.0.0
Downloading pycares-4.0.0-cp37-cp37m-manylinux2010_x86_64.whl (291 kB)
[K |████████████████████████████████| 291 kB 59.1 MB/s
[?25hRequirement already satisfied: attrs>=17.3.0 in /usr/local/lib/python3.7/dist-packages (from aiohttp<3.8,>=3.7.4->ccxt->finrl==0.3.1) (21.2.0)
Collecting async-timeout<4.0,>=3.0
Downloading async_timeout-3.0.1-py3-none-any.whl (8.2 kB)
Requirement already satisfied: cffi>=1.12 in /usr/local/lib/python3.7/dist-packages (from cryptography>=2.6.1->ccxt->finrl==0.3.1) (1.14.6)
Requirement already satisfied: pycparser in /usr/local/lib/python3.7/dist-packages (from cffi>=1.12->cryptography>=2.6.1->ccxt->finrl==0.3.1) (2.20)
Requirement already satisfied: SQLAlchemy>=1.2.8 in /usr/local/lib/python3.7/dist-packages (from jqdatasdk->finrl==0.3.1) (1.4.22)
Collecting thriftpy2>=0.3.9
Downloading thriftpy2-0.4.14.tar.gz (361 kB)
[K |████████████████████████████████| 361 kB 55.6 MB/s
[?25hCollecting pymysql>=0.7.6
Downloading PyMySQL-1.0.2-py3-none-any.whl (43 kB)
[K |████████████████████████████████| 43 kB 1.9 MB/s
[?25hRequirement already satisfied: importlib-metadata in /usr/local/lib/python3.7/dist-packages (from SQLAlchemy>=1.2.8->jqdatasdk->finrl==0.3.1) (4.6.4)
Requirement already satisfied: greenlet!=0.4.17 in /usr/local/lib/python3.7/dist-packages (from SQLAlchemy>=1.2.8->jqdatasdk->finrl==0.3.1) (1.1.1)
Collecting ply<4.0,>=3.4
Downloading ply-3.11-py2.py3-none-any.whl (49 kB)
[K |████████████████████████████████| 49 kB 4.5 MB/s
[?25hRequirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.7/dist-packages (from importlib-metadata->SQLAlchemy>=1.2.8->jqdatasdk->finrl==0.3.1) (3.5.0)
Requirement already satisfied: ptyprocess>=0.5 in /usr/local/lib/python3.7/dist-packages (from pexpect->ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.1) (0.7.0)
Requirement already satisfied: atomicwrites>=1.0 in /usr/local/lib/python3.7/dist-packages (from pytest->finrl==0.3.1) (1.4.0)
Requirement already satisfied: py>=1.5.0 in /usr/local/lib/python3.7/dist-packages (from pytest->finrl==0.3.1) (1.10.0)
Requirement already satisfied: pluggy<0.8,>=0.5 in /usr/local/lib/python3.7/dist-packages (from pytest->finrl==0.3.1) (0.7.1)
Requirement already satisfied: more-itertools>=4.0.0 in /usr/local/lib/python3.7/dist-packages (from pytest->finrl==0.3.1) (8.8.0)
Requirement already satisfied: filelock in /usr/local/lib/python3.7/dist-packages (from ray[default]->finrl==0.3.1) (3.0.12)
Collecting redis>=3.5.0
Downloading redis-3.5.3-py2.py3-none-any.whl (72 kB)
[K |████████████████████████████████| 72 kB 470 kB/s
[?25hRequirement already satisfied: click>=7.0 in /usr/local/lib/python3.7/dist-packages (from ray[default]->finrl==0.3.1) (7.1.2)
Requirement already satisfied: pyyaml in /usr/local/lib/python3.7/dist-packages (from ray[default]->finrl==0.3.1) (3.13)
Requirement already satisfied: protobuf>=3.15.3 in /usr/local/lib/python3.7/dist-packages (from ray[default]->finrl==0.3.1) (3.17.3)
Requirement already satisfied: grpcio>=1.28.1 in /usr/local/lib/python3.7/dist-packages (from ray[default]->finrl==0.3.1) (1.39.0)
Collecting aiohttp-cors
Downloading aiohttp_cors-0.7.0-py3-none-any.whl (27 kB)
Requirement already satisfied: jsonschema in /usr/local/lib/python3.7/dist-packages (from ray[default]->finrl==0.3.1) (2.6.0)
Collecting colorful
Downloading colorful-0.5.4-py2.py3-none-any.whl (201 kB)
[K |████████████████████████████████| 201 kB 50.0 MB/s
[?25hCollecting aioredis<2
Downloading aioredis-1.3.1-py3-none-any.whl (65 kB)
[K |████████████████████████████████| 65 kB 3.5 MB/s
[?25hCollecting py-spy>=0.2.0
Downloading py_spy-0.3.8-py2.py3-none-manylinux_2_5_x86_64.manylinux1_x86_64.whl (3.1 MB)
[K |████████████████████████████████| 3.1 MB 29.1 MB/s
[?25hCollecting gpustat
Downloading gpustat-0.6.0.tar.gz (78 kB)
[K |████████████████████████████████| 78 kB 5.7 MB/s
[?25hCollecting opencensus
Downloading opencensus-0.7.13-py2.py3-none-any.whl (127 kB)
[K |████████████████████████████████| 127 kB 44.2 MB/s
[?25hRequirement already satisfied: prometheus-client>=0.7.1 in /usr/local/lib/python3.7/dist-packages (from ray[default]->finrl==0.3.1) (0.11.0)
Collecting hiredis
Downloading hiredis-2.0.0-cp37-cp37m-manylinux2010_x86_64.whl (85 kB)
[K |████████████████████████████████| 85 kB 3.2 MB/s
[?25hRequirement already satisfied: nvidia-ml-py3>=7.352.0 in /usr/local/lib/python3.7/dist-packages (from gpustat->ray[default]->finrl==0.3.1) (7.352.0)
Requirement already satisfied: psutil in /usr/local/lib/python3.7/dist-packages (from gpustat->ray[default]->finrl==0.3.1) (5.4.8)
Collecting blessings>=1.6
Downloading blessings-1.7-py3-none-any.whl (18 kB)
Collecting opencensus-context==0.1.2
Downloading opencensus_context-0.1.2-py2.py3-none-any.whl (4.4 kB)
Requirement already satisfied: google-api-core<2.0.0,>=1.0.0 in /usr/local/lib/python3.7/dist-packages (from opencensus->ray[default]->finrl==0.3.1) (1.26.3)
Requirement already satisfied: google-auth<2.0dev,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from google-api-core<2.0.0,>=1.0.0->opencensus->ray[default]->finrl==0.3.1) (1.34.0)
Requirement already satisfied: packaging>=14.3 in /usr/local/lib/python3.7/dist-packages (from google-api-core<2.0.0,>=1.0.0->opencensus->ray[default]->finrl==0.3.1) (21.0)
Requirement already satisfied: googleapis-common-protos<2.0dev,>=1.6.0 in /usr/local/lib/python3.7/dist-packages (from google-api-core<2.0.0,>=1.0.0->opencensus->ray[default]->finrl==0.3.1) (1.53.0)
Requirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.7/dist-packages (from google-auth<2.0dev,>=1.21.1->google-api-core<2.0.0,>=1.0.0->opencensus->ray[default]->finrl==0.3.1) (0.2.8)
Requirement already satisfied: cachetools<5.0,>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from google-auth<2.0dev,>=1.21.1->google-api-core<2.0.0,>=1.0.0->opencensus->ray[default]->finrl==0.3.1) (4.2.2)
Requirement already satisfied: rsa<5,>=3.1.4 in /usr/local/lib/python3.7/dist-packages (from google-auth<2.0dev,>=1.21.1->google-api-core<2.0.0,>=1.0.0->opencensus->ray[default]->finrl==0.3.1) (4.7.2)
Requirement already satisfied: pyasn1<0.5.0,>=0.4.6 in /usr/local/lib/python3.7/dist-packages (from pyasn1-modules>=0.2.1->google-auth<2.0dev,>=1.21.1->google-api-core<2.0.0,>=1.0.0->opencensus->ray[default]->finrl==0.3.1) (0.4.8)
Requirement already satisfied: tabulate in /usr/local/lib/python3.7/dist-packages (from ray[default]->finrl==0.3.1) (0.8.9)
Requirement already satisfied: torch>=1.4.0 in /usr/local/lib/python3.7/dist-packages (from stable-baselines3[extra]->finrl==0.3.1) (1.9.0+cu102)
Requirement already satisfied: opencv-python in /usr/local/lib/python3.7/dist-packages (from stable-baselines3[extra]->finrl==0.3.1) (4.1.2.30)
Requirement already satisfied: atari-py~=0.2.0 in /usr/local/lib/python3.7/dist-packages (from stable-baselines3[extra]->finrl==0.3.1) (0.2.9)
Requirement already satisfied: pillow in /usr/local/lib/python3.7/dist-packages (from stable-baselines3[extra]->finrl==0.3.1) (7.1.2)
Requirement already satisfied: tensorboard>=2.2.0 in /usr/local/lib/python3.7/dist-packages (from stable-baselines3[extra]->finrl==0.3.1) (2.6.0)
Requirement already satisfied: tensorboard-data-server<0.7.0,>=0.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard>=2.2.0->stable-baselines3[extra]->finrl==0.3.1) (0.6.1)
Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in /usr/local/lib/python3.7/dist-packages (from tensorboard>=2.2.0->stable-baselines3[extra]->finrl==0.3.1) (0.4.5)
Requirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.7/dist-packages (from tensorboard>=2.2.0->stable-baselines3[extra]->finrl==0.3.1) (1.0.1)
Requirement already satisfied: absl-py>=0.4 in /usr/local/lib/python3.7/dist-packages (from tensorboard>=2.2.0->stable-baselines3[extra]->finrl==0.3.1) (0.12.0)
Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard>=2.2.0->stable-baselines3[extra]->finrl==0.3.1) (1.8.0)
Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.7/dist-packages (from tensorboard>=2.2.0->stable-baselines3[extra]->finrl==0.3.1) (3.3.4)
Requirement already satisfied: requests-oauthlib>=0.7.0 in /usr/local/lib/python3.7/dist-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard>=2.2.0->stable-baselines3[extra]->finrl==0.3.1) (1.3.0)
Requirement already satisfied: oauthlib>=3.0.0 in /usr/local/lib/python3.7/dist-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard>=2.2.0->stable-baselines3[extra]->finrl==0.3.1) (3.1.1)
Collecting int-date>=0.1.7
Downloading int_date-0.1.8-py2.py3-none-any.whl (5.0 kB)
Requirement already satisfied: toolz in /usr/local/lib/python3.7/dist-packages (from trading_calendars->finrl==0.3.1) (0.11.1)
Collecting mock
Downloading mock-4.0.3-py3-none-any.whl (28 kB)
Collecting psycopg2-binary
Downloading psycopg2_binary-2.9.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (3.4 MB)
[K |████████████████████████████████| 3.4 MB 15.0 MB/s
[?25hRequirement already satisfied: multitasking>=0.0.7 in /usr/local/lib/python3.7/dist-packages (from yfinance->finrl==0.3.1) (0.0.9)
Collecting lxml
Downloading lxml-4.6.3-cp37-cp37m-manylinux2014_x86_64.whl (6.3 MB)
[K |████████████████████████████████| 6.3 MB 24.1 MB/s
[?25hBuilding wheels for collected packages: finrl, pyfolio, empyrical, gputil, thriftpy2, gpustat, trading-calendars, yfinance
Building wheel for finrl (setup.py) ... [?25l[?25hdone
Created wheel for finrl: filename=finrl-0.3.1-py3-none-any.whl size=2732514 sha256=d5c403cd1a2d73433fa18f96b6f1438cadf253439f91c9f5b5617c78ce1c2a3b
Stored in directory: /tmp/pip-ephem-wheel-cache-r3pz55z1/wheels/17/ff/bd/1bc602a0352762b0b24041b88536d803ae343ed0a711fcf55e
Building wheel for pyfolio (setup.py) ... [?25l[?25hdone
Created wheel for pyfolio: filename=pyfolio-0.9.2+75.g4b901f6-py3-none-any.whl size=75775 sha256=4f0a0f7e2e86f37fe4225dd4bedf4c51ca8e960932a1ac3beaedd71a471dbd31
Stored in directory: /tmp/pip-ephem-wheel-cache-r3pz55z1/wheels/ef/09/e5/2c1bf37c050d22557c080deb1be986d06424627c04aeca19b9
Building wheel for empyrical (setup.py) ... [?25l[?25hdone
Created wheel for empyrical: filename=empyrical-0.5.5-py3-none-any.whl size=39777 sha256=85f07abd5a7a81461847f4ca81c5b8883a0cdbc26a6d84812560dea2b2d19f9c
Stored in directory: /root/.cache/pip/wheels/d9/91/4b/654fcff57477efcf149eaca236da2fce991526cbab431bf312
Building wheel for gputil (setup.py) ... [?25l[?25hdone
Created wheel for gputil: filename=GPUtil-1.4.0-py3-none-any.whl size=7411 sha256=b7a395c7857c2b0ac946ef42f1ab8d1c9f75b4768ceb134a335e72a553f5d13d
Stored in directory: /root/.cache/pip/wheels/6e/f8/83/534c52482d6da64622ddbf72cd93c35d2ef2881b78fd08ff0c
  Building wheel for thriftpy2 (setup.py) ... done
Created wheel for thriftpy2: filename=thriftpy2-0.4.14-cp37-cp37m-linux_x86_64.whl size=940419 sha256=4d4537cb53aafbb7700e1b19eaa60160d03da34b1f247d788ef1b0eec1778684
Stored in directory: /root/.cache/pip/wheels/2a/f5/49/9c0d851aa64b58db72883cf9393cc824d536bdf13f5c83cff4
  Building wheel for gpustat (setup.py) ... done
Created wheel for gpustat: filename=gpustat-0.6.0-py3-none-any.whl size=12617 sha256=16454a0926f4a3c029336ea46335507b8375d77be3cc9247a2d249b76cc2440b
Stored in directory: /root/.cache/pip/wheels/e6/67/af/f1ad15974b8fd95f59a63dbf854483ebe5c7a46a93930798b8
  Building wheel for trading-calendars (setup.py) ... done
Created wheel for trading-calendars: filename=trading_calendars-2.1.1-py3-none-any.whl size=140937 sha256=c57b62f8097002d6c11b6e7b62a335da6967bafcc73e329b2b3369fae1bf01fa
Stored in directory: /root/.cache/pip/wheels/62/9c/d1/46a21e1b99e064cba79b85e9f95e6a208ac5ba4c29ae5962ec
  Building wheel for yfinance (setup.py) ... done
Created wheel for yfinance: filename=yfinance-0.1.63-py2.py3-none-any.whl size=23918 sha256=715114c7d53ca17cac554d1865cae128d831dcf4adb03a8bb4e875aa6297a36d
Stored in directory: /root/.cache/pip/wheels/fe/87/8b/7ec24486e001d3926537f5f7801f57a74d181be25b11157983
Successfully built finrl pyfolio empyrical gputil thriftpy2 gpustat trading-calendars yfinance
Installing collected packages: multidict, yarl, lxml, async-timeout, redis, pycares, ply, opencensus-context, hiredis, blessings, aiohttp, websockets, websocket-client, thriftpy2, tensorboardX, stable-baselines3, ray, pymysql, py-spy, psycopg2-binary, opencensus, mock, int-date, gpustat, empyrical, cryptography, colorful, aioredis, aiohttp-cors, aiodns, yfinance, wrds, trading-calendars, stockstats, pyfolio, lz4, jqdatasdk, gputil, ccxt, alpaca-trade-api, finrl
Attempting uninstall: lxml
Found existing installation: lxml 4.2.6
Uninstalling lxml-4.2.6:
Successfully uninstalled lxml-4.2.6
Successfully installed aiodns-3.0.0 aiohttp-3.7.4.post0 aiohttp-cors-0.7.0 aioredis-1.3.1 alpaca-trade-api-1.2.3 async-timeout-3.0.1 blessings-1.7 ccxt-1.55.84 colorful-0.5.4 cryptography-3.4.8 empyrical-0.5.5 finrl-0.3.1 gpustat-0.6.0 gputil-1.4.0 hiredis-2.0.0 int-date-0.1.8 jqdatasdk-1.8.10 lxml-4.6.3 lz4-3.1.3 mock-4.0.3 multidict-5.1.0 opencensus-0.7.13 opencensus-context-0.1.2 ply-3.11 psycopg2-binary-2.9.1 py-spy-0.3.8 pycares-4.0.0 pyfolio-0.9.2+75.g4b901f6 pymysql-1.0.2 ray-1.6.0 redis-3.5.3 stable-baselines3-1.1.0 stockstats-0.3.2 tensorboardX-2.4 thriftpy2-0.4.14 trading-calendars-2.1.1 websocket-client-1.2.1 websockets-9.1 wrds-3.1.0 yarl-1.6.3 yfinance-0.1.63
###Markdown
2.2. Check if the additional packages needed are present, if not install them:
* Yahoo Finance API
* pandas
* numpy
* matplotlib
* stockstats
* OpenAI gym
* stable-baselines
* tensorflow
* pyfolio
2.3. Import Packages
###Code
import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
matplotlib.use('Agg')
%matplotlib inline
import datetime
from finrl.apps import config
from finrl.neo_finrl.preprocessor.yahoodownloader import YahooDownloader
from finrl.neo_finrl.preprocessor.preprocessors import FeatureEngineer, data_split
from finrl.neo_finrl.env_portfolio_allocation.env_portfolio import StockPortfolioEnv
from finrl.drl_agents.stablebaselines3.models import DRLAgent
from finrl.plot import backtest_stats, backtest_plot, get_daily_return, get_baseline,convert_daily_return_to_pyfolio_ts
import sys
sys.path.append("../FinRL-Library")
###Output
/usr/local/lib/python3.7/dist-packages/pyfolio/pos.py:27: UserWarning: Module "zipline.assets" not found; multipliers will not be applied to position notionals.
'Module "zipline.assets" not found; multipliers will not be applied'
###Markdown
2.4. Create Folders
###Code
import os
if not os.path.exists("./" + config.DATA_SAVE_DIR):
os.makedirs("./" + config.DATA_SAVE_DIR)
if not os.path.exists("./" + config.TRAINED_MODEL_DIR):
os.makedirs("./" + config.TRAINED_MODEL_DIR)
if not os.path.exists("./" + config.TENSORBOARD_LOG_DIR):
os.makedirs("./" + config.TENSORBOARD_LOG_DIR)
if not os.path.exists("./" + config.RESULTS_DIR):
os.makedirs("./" + config.RESULTS_DIR)
###Output
_____no_output_____
###Markdown
Part 3. Download Data
Yahoo Finance is a website that provides stock data, financial news, financial reports, etc. All the data provided by Yahoo Finance is free.
* FinRL uses a class **YahooDownloader** to fetch data from Yahoo Finance API
* Call Limit: Using the Public API (without authentication), you are limited to 2,000 requests per hour per IP (or up to a total of 48,000 requests a day).
###Code
print(config.DOW_30_TICKER)
df = YahooDownloader(start_date = '2008-01-01',
end_date = '2021-07-01',
ticker_list = config.DOW_30_TICKER).fetch_data()
df.head()
df.shape
###Output
_____no_output_____
###Markdown
Part 4: Preprocess Data
Data preprocessing is a crucial step for training a high quality machine learning model. We need to check for missing data and do feature engineering in order to convert the data into a model-ready state.
* Add technical indicators. In practical trading, various information needs to be taken into account, for example the historical stock prices, current holding shares, technical indicators, etc. In this article, we demonstrate two trend-following technical indicators: MACD and RSI.
* Add turbulence index. Risk-aversion reflects whether an investor will choose to preserve the capital. It also influences one's trading strategy when facing different market volatility levels. To control the risk in a worst-case scenario, such as the financial crisis of 2007–2008, FinRL employs the financial turbulence index that measures extreme asset price fluctuation.
###Code
fe = FeatureEngineer(
use_technical_indicator=True,
use_turbulence=False,
user_defined_feature = False)
df = fe.preprocess_data(df)
df.shape
df.head()
###Output
_____no_output_____
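###Markdown
 For intuition, here is a minimal pandas sketch (an editorial addition, not FinRL's internal code) of the two indicators named above; it assumes a per-ticker `close` series and the common default window sizes.
###Code
import pandas as pd

def macd_line(close: pd.Series, fast: int = 12, slow: int = 26) -> pd.Series:
    # MACD line = fast EMA minus slow EMA of the closing price
    return close.ewm(span=fast, adjust=False).mean() - close.ewm(span=slow, adjust=False).mean()

def rsi(close: pd.Series, window: int = 14) -> pd.Series:
    # RSI = 100 - 100 / (1 + average gain / average loss) over the window
    delta = close.diff()
    gain = delta.clip(lower=0).rolling(window).mean()
    loss = (-delta.clip(upper=0)).rolling(window).mean()
    return 100 - 100 / (1 + gain / loss)

one_ticker = df[df.tic == df.tic.iloc[0]]
print(macd_line(one_ticker.close).tail())
print(rsi(one_ticker.close).tail())
###Output
 _____no_output_____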
###Markdown
Add covariance matrix as states
###Code
# add covariance matrix as states
df=df.sort_values(['date','tic'],ignore_index=True)
df.index = df.date.factorize()[0]
cov_list = []
return_list = []
# look back is one year
lookback=252
for i in range(lookback,len(df.index.unique())):
data_lookback = df.loc[i-lookback:i,:]
price_lookback=data_lookback.pivot_table(index = 'date',columns = 'tic', values = 'close')
return_lookback = price_lookback.pct_change().dropna()
return_list.append(return_lookback)
covs = return_lookback.cov().values
cov_list.append(covs)
df_cov = pd.DataFrame({'date':df.date.unique()[lookback:],'cov_list':cov_list,'return_list':return_list})
df = df.merge(df_cov, on='date')
df = df.sort_values(['date','tic']).reset_index(drop=True)
df.shape
df.head()
###Output
_____no_output_____
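###Markdown
 A cheap sanity check (an editor's sketch) on the state construction above: every row should now carry a square covariance matrix of daily returns over the previous 252 trading days, and a covariance matrix must be symmetric.
###Code
import numpy as np
first_cov = np.asarray(df.loc[0, 'cov_list'])
print(first_cov.shape)  # expect (n_tickers, n_tickers)
assert np.allclose(first_cov, first_cov.T), "covariance matrix should be symmetric"
###Output
 _____no_output_____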
###Markdown
Part 5. Design Environment
Considering the stochastic and interactive nature of automated stock trading tasks, a financial task is modeled as a **Markov Decision Process (MDP)** problem. The training process involves observing stock price changes, taking an action and calculating the reward, so that the agent adjusts its strategy accordingly. By interacting with the environment, the trading agent derives a trading strategy that maximizes rewards as time proceeds.
Our trading environments, based on the OpenAI Gym framework, simulate live stock markets with real market data according to the principle of time-driven simulation.
The action space describes the allowed actions that the agent uses to interact with the environment. Normally, action a includes three actions: {-1, 0, 1}, where -1, 0, 1 represent selling, holding, and buying one share. Also, an action can be carried upon multiple shares. We use an action space {-k,…,-1, 0, 1, …, k}, where k denotes the number of shares to buy and -k denotes the number of shares to sell. For example, "Buy 10 shares of AAPL" or "Sell 10 shares of AAPL" are 10 or -10, respectively. The continuous action space needs to be normalized to [-1, 1], since the policy is defined on a Gaussian distribution, which needs to be normalized and symmetric.
Training data split: 2009-01-01 to 2020-07-01 (matching the `data_split` call below).
###Code
train = data_split(df, '2009-01-01','2020-07-01')
#trade = data_split(df, '2020-01-01', config.END_DATE)
train.head()
###Output
_____no_output_____
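###Markdown
 A quick check (an editor's sketch) that the split produced the intended window; `data_split` is assumed here to filter rows by date and re-index them per trading day.
###Code
print(train.date.min(), '->', train.date.max(), '|', len(train.index.unique()), 'trading days')
###Output
 _____no_output_____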
###Markdown
Environment for Portfolio Allocation
###Code
import numpy as np
import pandas as pd
from gym.utils import seeding
import gym
from gym import spaces
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
from stable_baselines3.common.vec_env import DummyVecEnv
class StockPortfolioEnv(gym.Env):
"""A single stock trading environment for OpenAI gym
Attributes
----------
df: DataFrame
input data
stock_dim : int
number of unique stocks
hmax : int
maximum number of shares to trade
initial_amount : int
start money
transaction_cost_pct: float
transaction cost percentage per trade
reward_scaling: float
scaling factor for reward, good for training
state_space: int
the dimension of input features
action_space: int
equals stock dimension
tech_indicator_list: list
a list of technical indicator names
turbulence_threshold: int
a threshold to control risk aversion
day: int
an increment number to control date
Methods
-------
_sell_stock()
perform sell action based on the sign of the action
_buy_stock()
perform buy action based on the sign of the action
step()
at each step the agent will return actions, then
we will calculate the reward, and return the next observation.
reset()
reset the environment
render()
use render to return other functions
save_asset_memory()
return account value at each time step
save_action_memory()
return actions/positions at each time step
"""
metadata = {'render.modes': ['human']}
def __init__(self,
df,
stock_dim,
hmax,
initial_amount,
transaction_cost_pct,
reward_scaling,
state_space,
action_space,
tech_indicator_list,
turbulence_threshold=None,
lookback=252,
day = 0):
#super(StockEnv, self).__init__()
#money = 10 , scope = 1
self.day = day
self.lookback=lookback
self.df = df
self.stock_dim = stock_dim
self.hmax = hmax
self.initial_amount = initial_amount
self.transaction_cost_pct =transaction_cost_pct
self.reward_scaling = reward_scaling
self.state_space = state_space
self.action_space = action_space
self.tech_indicator_list = tech_indicator_list
# action_space normalization and shape is self.stock_dim
self.action_space = spaces.Box(low = 0, high = 1,shape = (self.action_space,))
# Shape = (34, 30)
# covariance matrix + technical indicators
self.observation_space = spaces.Box(low=-np.inf, high=np.inf, shape = (self.state_space+len(self.tech_indicator_list),self.state_space))
# load data from a pandas dataframe
self.data = self.df.loc[self.day,:]
self.covs = self.data['cov_list'].values[0]
self.state = np.append(np.array(self.covs), [self.data[tech].values.tolist() for tech in self.tech_indicator_list ], axis=0)
self.terminal = False
self.turbulence_threshold = turbulence_threshold
# initialize state: initial portfolio return + individual stock return + individual weights
self.portfolio_value = self.initial_amount
# memorize portfolio value each step
self.asset_memory = [self.initial_amount]
# memorize portfolio return each step
self.portfolio_return_memory = [0]
self.actions_memory=[[1/self.stock_dim]*self.stock_dim]
self.date_memory=[self.data.date.unique()[0]]
def step(self, actions):
# print(self.day)
self.terminal = self.day >= len(self.df.index.unique())-1
# print(actions)
if self.terminal:
df = pd.DataFrame(self.portfolio_return_memory)
df.columns = ['pct_return']
plt.plot(df.pct_return.cumsum(),'r')
plt.savefig('results/cumulative_reward.png')
plt.close()
plt.plot(self.portfolio_return_memory,'r')
plt.savefig('results/rewards.png')
plt.close()
print("=================================")
print("begin_total_asset:{}".format(self.asset_memory[0]))
print("end_total_asset:{}".format(self.portfolio_value))
df_daily_return = pd.DataFrame(self.portfolio_return_memory)
df_daily_return.columns = ['pct_return']
if df_daily_return['pct_return'].std() !=0:
sharpe = (252**0.5)*df_daily_return['pct_return'].mean()/ \
df_daily_return['pct_return'].std()
print("Sharpe: ",sharpe)
print("=================================")
return self.state, self.reward, self.terminal,{}
else:
#print("Model actions: ",actions)
# actions are the portfolio weight
# normalize to sum of 1
#if (np.array(actions) - np.array(actions).min()).sum() != 0:
# norm_actions = (np.array(actions) - np.array(actions).min()) / (np.array(actions) - np.array(actions).min()).sum()
#else:
# norm_actions = actions
weights = self.softmax_normalization(actions)
#print("Normalized actions: ", weights)
self.actions_memory.append(weights)
last_day_memory = self.data
#load next state
self.day += 1
self.data = self.df.loc[self.day,:]
self.covs = self.data['cov_list'].values[0]
self.state = np.append(np.array(self.covs), [self.data[tech].values.tolist() for tech in self.tech_indicator_list ], axis=0)
#print(self.state)
# calculate portfolio return
# individual stocks' return * weight
portfolio_return = sum(((self.data.close.values / last_day_memory.close.values)-1)*weights)
# update portfolio value
new_portfolio_value = self.portfolio_value*(1+portfolio_return)
self.portfolio_value = new_portfolio_value
# save into memory
self.portfolio_return_memory.append(portfolio_return)
self.date_memory.append(self.data.date.unique()[0])
self.asset_memory.append(new_portfolio_value)
# the reward is the new portfolio value (i.e., the end portfolio value at the final step)
self.reward = new_portfolio_value
#print("Step reward: ", self.reward)
#self.reward = self.reward*self.reward_scaling
return self.state, self.reward, self.terminal, {}
def reset(self):
self.asset_memory = [self.initial_amount]
self.day = 0
self.data = self.df.loc[self.day,:]
# load states
self.covs = self.data['cov_list'].values[0]
self.state = np.append(np.array(self.covs), [self.data[tech].values.tolist() for tech in self.tech_indicator_list ], axis=0)
self.portfolio_value = self.initial_amount
#self.cost = 0
#self.trades = 0
self.terminal = False
self.portfolio_return_memory = [0]
self.actions_memory=[[1/self.stock_dim]*self.stock_dim]
self.date_memory=[self.data.date.unique()[0]]
return self.state
def render(self, mode='human'):
return self.state
def softmax_normalization(self, actions):
numerator = np.exp(actions)
denominator = np.sum(np.exp(actions))
softmax_output = numerator/denominator
return softmax_output
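    # Editor's note (a sketch, not in the original): np.exp overflows for large
    # raw action values; a numerically safer softmax subtracts the max first:
    #   z = actions - np.max(actions)
    #   return np.exp(z) / np.sum(np.exp(z))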
def save_asset_memory(self):
date_list = self.date_memory
portfolio_return = self.portfolio_return_memory
#print(len(date_list))
#print(len(asset_list))
df_account_value = pd.DataFrame({'date':date_list,'pct_return':portfolio_return})
return df_account_value
def save_action_memory(self):
# date and close price length must match actions length
date_list = self.date_memory
df_date = pd.DataFrame(date_list)
df_date.columns = ['date']
action_list = self.actions_memory
df_actions = pd.DataFrame(action_list)
df_actions.columns = self.data.tic.values
df_actions.index = df_date.date
#df_actions = pd.DataFrame({'date':date_list,'actions':action_list})
return df_actions
def _seed(self, seed=None):
self.np_random, seed = seeding.np_random(seed)
return [seed]
def get_sb_env(self):
e = DummyVecEnv([lambda: self])
obs = e.reset()
return e, obs
stock_dimension = len(train.tic.unique())
state_space = stock_dimension
print(f"Stock Dimension: {stock_dimension}, State Space: {state_space}")
env_kwargs = {
"hmax": 100,
"initial_amount": 1000000,
"transaction_cost_pct": 0.001,
"state_space": state_space,
"stock_dim": stock_dimension,
"tech_indicator_list": config.TECHNICAL_INDICATORS_LIST,
"action_space": stock_dimension,
"reward_scaling": 1e-4
}
e_train_gym = StockPortfolioEnv(df = train, **env_kwargs)
env_train, _ = e_train_gym.get_sb_env()
print(type(env_train))
###Output
<class 'stable_baselines3.common.vec_env.dummy_vec_env.DummyVecEnv'>
###Markdown
Part 6: Implement DRL Algorithms
* The implementation of the DRL algorithms is based on **OpenAI Baselines** and **Stable Baselines**. Stable Baselines is a fork of OpenAI Baselines, with a major structural refactoring, and code cleanups.
* FinRL library includes fine-tuned standard DRL algorithms, such as DQN, DDPG, Multi-Agent DDPG, PPO, SAC, A2C and TD3. We also allow users to design their own DRL algorithms by adapting these DRL algorithms.
###Code
# initialize
agent = DRLAgent(env = env_train)
###Output
_____no_output_____
###Markdown
Model 1: **A2C**
###Code
agent = DRLAgent(env = env_train)
A2C_PARAMS = {"n_steps": 5, "ent_coef": 0.005, "learning_rate": 0.0002}
model_a2c = agent.get_model(model_name="a2c",model_kwargs = A2C_PARAMS)
trained_a2c = agent.train_model(model=model_a2c,
tb_log_name='a2c',
total_timesteps=50000)
trained_a2c.save('/content/trained_models/trained_a2c.zip')
###Output
_____no_output_____
###Markdown
Model 2: **PPO**
###Code
agent = DRLAgent(env = env_train)
PPO_PARAMS = {
"n_steps": 2048,
"ent_coef": 0.005,
"learning_rate": 0.0001,
"batch_size": 128,
}
model_ppo = agent.get_model("ppo",model_kwargs = PPO_PARAMS)
trained_ppo = agent.train_model(model=model_ppo,
tb_log_name='ppo',
total_timesteps=80000)
trained_ppo.save('/content/trained_models/trained_ppo.zip')
###Output
_____no_output_____
###Markdown
Model 3: **DDPG**
###Code
agent = DRLAgent(env = env_train)
DDPG_PARAMS = {"batch_size": 128, "buffer_size": 50000, "learning_rate": 0.001}
model_ddpg = agent.get_model("ddpg",model_kwargs = DDPG_PARAMS)
trained_ddpg = agent.train_model(model=model_ddpg,
tb_log_name='ddpg',
total_timesteps=50000)
trained_ddpg.save('/content/trained_models/trained_ddpg.zip')
###Output
_____no_output_____
###Markdown
Model 4: **SAC**
###Code
agent = DRLAgent(env = env_train)
SAC_PARAMS = {
"batch_size": 128,
"buffer_size": 100000,
"learning_rate": 0.0003,
"learning_starts": 100,
"ent_coef": "auto_0.1",
}
model_sac = agent.get_model("sac",model_kwargs = SAC_PARAMS)
trained_sac = agent.train_model(model=model_sac,
tb_log_name='sac',
total_timesteps=50000)
trained_sac.save('/content/trained_models/trained_sac.zip')
###Output
_____no_output_____
###Markdown
Model 5: **TD3**
###Code
agent = DRLAgent(env = env_train)
TD3_PARAMS = {"batch_size": 100,
"buffer_size": 1000000,
"learning_rate": 0.001}
model_td3 = agent.get_model("td3",model_kwargs = TD3_PARAMS)
trained_td3 = agent.train_model(model=model_td3,
tb_log_name='td3',
total_timesteps=30000)
trained_td3.save('/content/trained_models/trained_td3.zip')
###Output
_____no_output_____
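###Markdown
 A saved agent can be restored later without retraining (an editor's sketch; the path mirrors the `save` calls above, and the class must match the algorithm that produced the zip file).
###Code
from stable_baselines3 import A2C
loaded_a2c = A2C.load('/content/trained_models/trained_a2c.zip')
###Output
 _____no_output_____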
###Markdown
Trading
Assume that we have $1,000,000 initial capital at 2020-07-01. We use the trained A2C model to trade Dow Jones 30 stocks.
###Code
trade = data_split(df,'2020-07-01', '2021-07-01')
e_trade_gym = StockPortfolioEnv(df = trade, **env_kwargs)
trade.shape
df_daily_return, df_actions = DRLAgent.DRL_prediction(model=trained_a2c,
environment = e_trade_gym)
df_daily_return.head()
df_daily_return.to_csv('df_daily_return.csv')
df_actions.head()
df_actions.to_csv('df_actions.csv')
###Output
_____no_output_____
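###Markdown
 Because the environment softmax-normalizes actions, each row of `df_actions` holds one day's portfolio weights and should sum to approximately 1 — a cheap invariant to verify (an editor's sketch).
###Code
import numpy as np
print(np.allclose(df_actions.sum(axis=1), 1.0))
###Output
 _____no_output_____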
###Markdown
Part 7: Backtest Our Strategy
Backtesting plays a key role in evaluating the performance of a trading strategy. An automated backtesting tool is preferred because it reduces human error. We usually use the Quantopian pyfolio package to backtest our trading strategies. It is easy to use and consists of various individual plots that provide a comprehensive image of the performance of a trading strategy.
7.1 BackTestStats
Pass in df_account_value; this information is stored in the env class.
###Code
from pyfolio import timeseries
DRL_strat = convert_daily_return_to_pyfolio_ts(df_daily_return)
perf_func = timeseries.perf_stats
perf_stats_all = perf_func( returns=DRL_strat,
factor_returns=DRL_strat,
positions=None, transactions=None, turnover_denom="AGB")
print("==============DRL Strategy Stats===========")
perf_stats_all
#baseline stats
print("==============Get Baseline Stats===========")
baseline_df = get_baseline(
ticker="^DJI",
start = df_daily_return.loc[0,'date'],
end = df_daily_return.loc[len(df_daily_return)-1,'date'])
stats = backtest_stats(baseline_df, value_col_name = 'close')
###Output
==============Get Baseline Stats===========
[*********************100%***********************] 1 of 1 completed
Shape of DataFrame: (251, 8)
Annual return 0.334042
Cumulative returns 0.332517
Annual volatility 0.146033
Sharpe ratio 2.055458
Calmar ratio 3.740347
Stability 0.945402
Max drawdown -0.089308
Omega ratio 1.408111
Sortino ratio 3.075978
Skew NaN
Kurtosis NaN
Tail ratio 1.078766
Daily value at risk -0.017207
dtype: float64
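###Markdown
 The strategy's Sharpe ratio in `perf_stats_all` can be reproduced by hand (an editor's sketch) with the same formula the environment prints at the end of an episode: sqrt(252) * mean(daily return) / std(daily return). The `pct_return` column name comes from `save_asset_memory` above.
###Code
daily = df_daily_return['pct_return']
print((252 ** 0.5) * daily.mean() / daily.std())
###Output
 _____no_output_____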
###Markdown
7.2 BackTestPlot
###Code
import pyfolio
%matplotlib inline
baseline_df = get_baseline(
ticker='^DJI', start=df_daily_return.loc[0,'date'], end='2021-07-01'
)
baseline_returns = get_daily_return(baseline_df, value_col_name="close")
with pyfolio.plotting.plotting_context(font_scale=1.1):
pyfolio.create_full_tear_sheet(returns = DRL_strat,
benchmark_rets=baseline_returns, set_context=False)
###Output
_____no_output_____
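###Markdown
 If pyfolio is unavailable, the headline comparison can be reproduced (an editor's sketch) with a single matplotlib plot of cumulative returns for the strategy and the DJIA baseline.
###Code
import matplotlib.pyplot as plt
((1 + DRL_strat).cumprod() - 1).plot(label='A2C strategy')
((1 + baseline_returns).cumprod() - 1).plot(label='DJIA')
plt.ylabel('Cumulative return')
plt.legend()
plt.show()
###Output
 _____no_output_____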
###Markdown
Min-Variance Portfolio Allocation
###Code
!pip install PyPortfolioOpt
from pypfopt.efficient_frontier import EfficientFrontier
from pypfopt import risk_models
unique_tic = trade.tic.unique()
unique_trade_date = trade.date.unique()
df.head()
#calculate_portfolio_minimum_variance
portfolio = pd.DataFrame(index = range(1), columns = unique_trade_date)
initial_capital = 1000000
portfolio.loc[0,unique_trade_date[0]] = initial_capital
for i in range(len( unique_trade_date)-1):
df_temp = df[df.date==unique_trade_date[i]].reset_index(drop=True)
df_temp_next = df[df.date==unique_trade_date[i+1]].reset_index(drop=True)
#Sigma = risk_models.sample_cov(df_temp.return_list[0])
#calculate covariance matrix
Sigma = df_temp.return_list[0].cov()
#portfolio allocation
ef_min_var = EfficientFrontier(None, Sigma,weight_bounds=(0, 0.1))
#minimum variance
raw_weights_min_var = ef_min_var.min_volatility()
#get weights
cleaned_weights_min_var = ef_min_var.clean_weights()
#current capital
cap = portfolio.iloc[0, i]
#current cash invested for each stock
current_cash = [element * cap for element in list(cleaned_weights_min_var.values())]
# current held shares
current_shares = list(np.array(current_cash)
/ np.array(df_temp.close))
# next time period price
next_price = np.array(df_temp_next.close)
##next_price * current share to calculate next total account value
portfolio.iloc[0, i+1] = np.dot(current_shares, next_price)
portfolio=portfolio.T
portfolio.columns = ['account_value']
portfolio.head()
a2c_cumpod =(df_daily_return.pct_return+1).cumprod()-1
min_var_cumpod =(portfolio.account_value.pct_change()+1).cumprod()-1
dji_cumpod =(baseline_returns+1).cumprod()-1
###Output
_____no_output_____
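###Markdown
 Before plotting, a quick risk comparison (an editor's sketch) of annualized volatility for each track, using the return series already built above.
###Code
import numpy as np
ann_vol = lambda r: r.dropna().std() * np.sqrt(252)
print('A2C    :', ann_vol(df_daily_return.pct_return))
print('Min-Var:', ann_vol(portfolio.account_value.pct_change()))
print('DJIA   :', ann_vol(baseline_returns))
###Output
 _____no_output_____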
###Markdown
Plotly: DRL, Min-Variance, DJIA
###Code
from datetime import datetime as dt
import matplotlib.pyplot as plt
import plotly
import plotly.graph_objs as go
time_ind = pd.Series(df_daily_return.date)
trace0_portfolio = go.Scatter(x = time_ind, y = a2c_cumpod, mode = 'lines', name = 'A2C (Portfolio Allocation)')
trace1_portfolio = go.Scatter(x = time_ind, y = dji_cumpod, mode = 'lines', name = 'DJIA')
trace2_portfolio = go.Scatter(x = time_ind, y = min_var_cumpod, mode = 'lines', name = 'Min-Variance')
#trace3_portfolio = go.Scatter(x = time_ind, y = ddpg_cumpod, mode = 'lines', name = 'DDPG')
#trace4_portfolio = go.Scatter(x = time_ind, y = addpg_cumpod, mode = 'lines', name = 'Adaptive-DDPG')
#trace5_portfolio = go.Scatter(x = time_ind, y = min_cumpod, mode = 'lines', name = 'Min-Variance')
#trace4 = go.Scatter(x = time_ind, y = addpg_cumpod, mode = 'lines', name = 'Adaptive-DDPG')
#trace2 = go.Scatter(x = time_ind, y = portfolio_cost_minv, mode = 'lines', name = 'Min-Variance')
#trace3 = go.Scatter(x = time_ind, y = spx_value, mode = 'lines', name = 'SPX')
fig = go.Figure()
fig.add_trace(trace0_portfolio)
fig.add_trace(trace1_portfolio)
fig.add_trace(trace2_portfolio)
fig.update_layout(
legend=dict(
x=0,
y=1,
traceorder="normal",
font=dict(
family="sans-serif",
size=15,
color="black"
),
bgcolor="White",
bordercolor="white",
borderwidth=2
),
)
#fig.update_layout(legend_orientation="h")
fig.update_layout(title={
#'text': "Cumulative Return using FinRL",
'y':0.85,
'x':0.5,
'xanchor': 'center',
'yanchor': 'top'})
#with Transaction cost
#fig.update_layout(title = 'Quarterly Trade Date')
fig.update_layout(
# margin=dict(l=20, r=20, t=20, b=20),
paper_bgcolor='rgba(1,1,0,0)',
plot_bgcolor='rgba(1, 1, 0, 0)',
#xaxis_title="Date",
yaxis_title="Cumulative Return",
xaxis={'type': 'date',
'tick0': time_ind[0],
'tickmode': 'linear',
'dtick': 86400000.0 *80}
)
fig.update_xaxes(showline=True,linecolor='black',showgrid=True, gridwidth=1, gridcolor='LightSteelBlue',mirror=True)
fig.update_yaxes(showline=True,linecolor='black',showgrid=True, gridwidth=1, gridcolor='LightSteelBlue',mirror=True)
fig.update_yaxes(zeroline=True, zerolinewidth=1, zerolinecolor='LightSteelBlue')
fig.show()
###Output
_____no_output_____
###Markdown
Deep Reinforcement Learning for Stock Trading from Scratch: Portfolio Allocation
Tutorials to use OpenAI DRL to perform portfolio allocation in one Jupyter Notebook | Presented at NeurIPS 2020: Deep RL Workshop
* This blog is based on our paper: FinRL: A Deep Reinforcement Learning Library for Automated Stock Trading in Quantitative Finance, presented at NeurIPS 2020: Deep RL Workshop.
* Check out medium blog for detailed explanations:
* Please report any issues to our Github: https://github.com/AI4Finance-LLC/FinRL-Library/issues
* **Pytorch Version**
Content
* [1. Problem Definition](0)
* [2. Getting Started - Load Python packages](1)
  * [2.1. Install Packages](1.1)
  * [2.2. Check Additional Packages](1.2)
  * [2.3. Import Packages](1.3)
  * [2.4. Create Folders](1.4)
* [3. Download Data](2)
* [4. Preprocess Data](3)
  * [4.1. Technical Indicators](3.1)
  * [4.2. Perform Feature Engineering](3.2)
* [5. Build Environment](4)
  * [5.1. Training & Trade Data Split](4.1)
  * [5.2. User-defined Environment](4.2)
  * [5.3. Initialize Environment](4.3)
* [6. Implement DRL Algorithms](5)
* [7. Backtesting Performance](6)
  * [7.1. BackTestStats](6.1)
  * [7.2. BackTestPlot](6.2)
  * [7.3. Baseline Stats](6.3)
  * [7.4. Compare to Stock Market Index](6.4)
Part 1. Problem Definition
This problem is to design an automated trading solution for portfolio allocation. We model the trading process as a Markov Decision Process (MDP). We then formulate our trading goal as a maximization problem.
The algorithm is trained using Deep Reinforcement Learning (DRL) algorithms and the components of the reinforcement learning environment are:
* Action: The action space describes the allowed actions that the agent interacts with the
environment. Normally, a ∈ A includes three actions: a ∈ {−1, 0, 1}, where −1, 0, 1 represent
selling, holding, and buying one stock. Also, an action can be carried upon multiple shares. We use
an action space {−k, ..., −1, 0, 1, ..., k}, where k denotes the number of shares. For example, "Buy
10 shares of AAPL" or "Sell 10 shares of AAPL" are 10 or −10, respectively
* Reward function: r(s, a, s′) is the incentive mechanism for an agent to learn a better action. It is the change of the portfolio value when action a is taken at state s and arriving at new state s′, i.e., r(s, a, s′) = v′ − v, where v′ and v represent the portfolio values at state s′ and s, respectively (see the sketch after this section).
* State: The state space describes the observations that the agent receives from the environment. Just as a human trader needs to analyze various information before executing a trade, so
our trading agent observes many different features to better learn in an interactive environment.
* Environment: Dow 30 constituents
The data of the Dow 30 constituent stocks that we will be using for this case study is obtained from the Yahoo Finance API. The data contains Open-High-Low-Close prices and volume.
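A minimal sketch of the reward definition above (an editorial addition):
###Code
def reward(v_prev: float, v_next: float) -> float:
    # r(s, a, s') = v' - v: the change in portfolio value between states
    return v_next - v_prev
###Output
 _____no_output_____
###Markdown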
Part 2. Getting Started- Load Python Packages
2.1. Install all the packages through FinRL library
###Code
## install finrl library
!pip install git+https://github.com/AI4Finance-LLC/FinRL-Library.git
###Output
Collecting git+https://github.com/AI4Finance-LLC/FinRL-Library.git
Cloning https://github.com/AI4Finance-LLC/FinRL-Library.git to /tmp/pip-req-build-bwdyljxc
Running command git clone -q https://github.com/AI4Finance-LLC/FinRL-Library.git /tmp/pip-req-build-bwdyljxc
Requirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from finrl==0.0.3) (1.19.5)
Requirement already satisfied: pandas>=1.1.5 in /usr/local/lib/python3.7/dist-packages (from finrl==0.0.3) (1.1.5)
Collecting stockstats
Downloading https://files.pythonhosted.org/packages/32/41/d3828c5bc0a262cb3112a4024108a3b019c183fa3b3078bff34bf25abf91/stockstats-0.3.2-py2.py3-none-any.whl
Collecting yfinance
Downloading https://files.pythonhosted.org/packages/7a/e8/b9d7104d3a4bf39924799067592d9e59119fcfc900a425a12e80a3123ec8/yfinance-0.1.55.tar.gz
Requirement already satisfied: matplotlib in /usr/local/lib/python3.7/dist-packages (from finrl==0.0.3) (3.2.2)
Requirement already satisfied: scikit-learn>=0.21.0 in /usr/local/lib/python3.7/dist-packages (from finrl==0.0.3) (0.22.2.post1)
Requirement already satisfied: gym>=0.17 in /usr/local/lib/python3.7/dist-packages (from finrl==0.0.3) (0.17.3)
Collecting stable-baselines3[extra]
  Downloading https://files.pythonhosted.org/packages/76/7c/ec89fd9a51c2ff640f150479069be817136c02f02349b5dd27a6e3bb8b3d/stable_baselines3-0.10.0-py3-none-any.whl (145kB)
Requirement already satisfied: pytest in /usr/local/lib/python3.7/dist-packages (from finrl==0.0.3) (3.6.4)
Requirement already satisfied: setuptools>=41.4.0 in /usr/local/lib/python3.7/dist-packages (from finrl==0.0.3) (53.0.0)
Requirement already satisfied: wheel>=0.33.6 in /usr/local/lib/python3.7/dist-packages (from finrl==0.0.3) (0.36.2)
Collecting pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2
Cloning https://github.com/quantopian/pyfolio.git to /tmp/pip-install-jk1inqx3/pyfolio
Running command git clone -q https://github.com/quantopian/pyfolio.git /tmp/pip-install-jk1inqx3/pyfolio
Requirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.7/dist-packages (from pandas>=1.1.5->finrl==0.0.3) (2.8.1)
Requirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.7/dist-packages (from pandas>=1.1.5->finrl==0.0.3) (2018.9)
Collecting int-date>=0.1.7
Downloading https://files.pythonhosted.org/packages/43/27/31803df15173ab341fe7548c14154b54227dfd8f630daa09a1c6e7db52f7/int_date-0.1.8-py2.py3-none-any.whl
Requirement already satisfied: requests>=2.20 in /usr/local/lib/python3.7/dist-packages (from yfinance->finrl==0.0.3) (2.23.0)
Requirement already satisfied: multitasking>=0.0.7 in /usr/local/lib/python3.7/dist-packages (from yfinance->finrl==0.0.3) (0.0.9)
Collecting lxml>=4.5.1
  Downloading https://files.pythonhosted.org/packages/d2/88/b25778f17e5320c1c58f8c5060fb5b037288e162bd7554c30799e9ea90db/lxml-4.6.2-cp37-cp37m-manylinux1_x86_64.whl (5.5MB)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->finrl==0.0.3) (2.4.7)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.7/dist-packages (from matplotlib->finrl==0.0.3) (0.10.0)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->finrl==0.0.3) (1.3.1)
Requirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.7/dist-packages (from scikit-learn>=0.21.0->finrl==0.0.3) (1.0.1)
Requirement already satisfied: scipy>=0.17.0 in /usr/local/lib/python3.7/dist-packages (from scikit-learn>=0.21.0->finrl==0.0.3) (1.4.1)
Requirement already satisfied: pyglet<=1.5.0,>=1.4.0 in /usr/local/lib/python3.7/dist-packages (from gym>=0.17->finrl==0.0.3) (1.5.0)
Requirement already satisfied: cloudpickle<1.7.0,>=1.2.0 in /usr/local/lib/python3.7/dist-packages (from gym>=0.17->finrl==0.0.3) (1.3.0)
Requirement already satisfied: torch>=1.4.0 in /usr/local/lib/python3.7/dist-packages (from stable-baselines3[extra]->finrl==0.0.3) (1.7.0+cu101)
Requirement already satisfied: tensorboard; extra == "extra" in /usr/local/lib/python3.7/dist-packages (from stable-baselines3[extra]->finrl==0.0.3) (2.4.1)
Requirement already satisfied: psutil; extra == "extra" in /usr/local/lib/python3.7/dist-packages (from stable-baselines3[extra]->finrl==0.0.3) (5.4.8)
Requirement already satisfied: opencv-python; extra == "extra" in /usr/local/lib/python3.7/dist-packages (from stable-baselines3[extra]->finrl==0.0.3) (4.1.2.30)
Requirement already satisfied: pillow; extra == "extra" in /usr/local/lib/python3.7/dist-packages (from stable-baselines3[extra]->finrl==0.0.3) (7.0.0)
Requirement already satisfied: atari-py~=0.2.0; extra == "extra" in /usr/local/lib/python3.7/dist-packages (from stable-baselines3[extra]->finrl==0.0.3) (0.2.6)
Requirement already satisfied: six>=1.10.0 in /usr/local/lib/python3.7/dist-packages (from pytest->finrl==0.0.3) (1.15.0)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.7/dist-packages (from pytest->finrl==0.0.3) (20.3.0)
Requirement already satisfied: more-itertools>=4.0.0 in /usr/local/lib/python3.7/dist-packages (from pytest->finrl==0.0.3) (8.7.0)
Requirement already satisfied: pluggy<0.8,>=0.5 in /usr/local/lib/python3.7/dist-packages (from pytest->finrl==0.0.3) (0.7.1)
Requirement already satisfied: py>=1.5.0 in /usr/local/lib/python3.7/dist-packages (from pytest->finrl==0.0.3) (1.10.0)
Requirement already satisfied: atomicwrites>=1.0 in /usr/local/lib/python3.7/dist-packages (from pytest->finrl==0.0.3) (1.4.0)
Requirement already satisfied: ipython>=3.2.3 in /usr/local/lib/python3.7/dist-packages (from pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.3) (5.5.0)
Requirement already satisfied: seaborn>=0.7.1 in /usr/local/lib/python3.7/dist-packages (from pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.3) (0.11.1)
Collecting empyrical>=0.5.0
  Downloading https://files.pythonhosted.org/packages/74/43/1b997c21411c6ab7c96dc034e160198272c7a785aeea7654c9bcf98bec83/empyrical-0.5.5.tar.gz (52kB)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests>=2.20->yfinance->finrl==0.0.3) (1.24.3)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests>=2.20->yfinance->finrl==0.0.3) (3.0.4)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests>=2.20->yfinance->finrl==0.0.3) (2.10)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests>=2.20->yfinance->finrl==0.0.3) (2020.12.5)
Requirement already satisfied: future in /usr/local/lib/python3.7/dist-packages (from pyglet<=1.5.0,>=1.4.0->gym>=0.17->finrl==0.0.3) (0.16.0)
Requirement already satisfied: dataclasses in /usr/local/lib/python3.7/dist-packages (from torch>=1.4.0->stable-baselines3[extra]->finrl==0.0.3) (0.6)
Requirement already satisfied: typing-extensions in /usr/local/lib/python3.7/dist-packages (from torch>=1.4.0->stable-baselines3[extra]->finrl==0.0.3) (3.7.4.3)
Requirement already satisfied: protobuf>=3.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard; extra == "extra"->stable-baselines3[extra]->finrl==0.0.3) (3.12.4)
Requirement already satisfied: grpcio>=1.24.3 in /usr/local/lib/python3.7/dist-packages (from tensorboard; extra == "extra"->stable-baselines3[extra]->finrl==0.0.3) (1.32.0)
Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in /usr/local/lib/python3.7/dist-packages (from tensorboard; extra == "extra"->stable-baselines3[extra]->finrl==0.0.3) (0.4.2)
Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.7/dist-packages (from tensorboard; extra == "extra"->stable-baselines3[extra]->finrl==0.0.3) (3.3.3)
Requirement already satisfied: absl-py>=0.4 in /usr/local/lib/python3.7/dist-packages (from tensorboard; extra == "extra"->stable-baselines3[extra]->finrl==0.0.3) (0.10.0)
Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard; extra == "extra"->stable-baselines3[extra]->finrl==0.0.3) (1.8.0)
Requirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.7/dist-packages (from tensorboard; extra == "extra"->stable-baselines3[extra]->finrl==0.0.3) (1.0.1)
Requirement already satisfied: google-auth<2,>=1.6.3 in /usr/local/lib/python3.7/dist-packages (from tensorboard; extra == "extra"->stable-baselines3[extra]->finrl==0.0.3) (1.27.0)
Requirement already satisfied: pickleshare in /usr/local/lib/python3.7/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.3) (0.7.5)
Requirement already satisfied: pexpect; sys_platform != "win32" in /usr/local/lib/python3.7/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.3) (4.8.0)
Requirement already satisfied: prompt-toolkit<2.0.0,>=1.0.4 in /usr/local/lib/python3.7/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.3) (1.0.18)
Requirement already satisfied: pygments in /usr/local/lib/python3.7/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.3) (2.6.1)
Requirement already satisfied: traitlets>=4.2 in /usr/local/lib/python3.7/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.3) (4.3.3)
Requirement already satisfied: simplegeneric>0.8 in /usr/local/lib/python3.7/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.3) (0.8.1)
Requirement already satisfied: decorator in /usr/local/lib/python3.7/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.3) (4.4.2)
Requirement already satisfied: pandas-datareader>=0.2 in /usr/local/lib/python3.7/dist-packages (from empyrical>=0.5.0->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.3) (0.9.0)
Requirement already satisfied: requests-oauthlib>=0.7.0 in /usr/local/lib/python3.7/dist-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard; extra == "extra"->stable-baselines3[extra]->finrl==0.0.3) (1.3.0)
Requirement already satisfied: importlib-metadata; python_version < "3.8" in /usr/local/lib/python3.7/dist-packages (from markdown>=2.6.8->tensorboard; extra == "extra"->stable-baselines3[extra]->finrl==0.0.3) (3.4.0)
Requirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.7/dist-packages (from google-auth<2,>=1.6.3->tensorboard; extra == "extra"->stable-baselines3[extra]->finrl==0.0.3) (0.2.8)
Requirement already satisfied: rsa<5,>=3.1.4; python_version >= "3.6" in /usr/local/lib/python3.7/dist-packages (from google-auth<2,>=1.6.3->tensorboard; extra == "extra"->stable-baselines3[extra]->finrl==0.0.3) (4.7.1)
Requirement already satisfied: cachetools<5.0,>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from google-auth<2,>=1.6.3->tensorboard; extra == "extra"->stable-baselines3[extra]->finrl==0.0.3) (4.2.1)
Requirement already satisfied: ptyprocess>=0.5 in /usr/local/lib/python3.7/dist-packages (from pexpect; sys_platform != "win32"->ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.3) (0.7.0)
Requirement already satisfied: wcwidth in /usr/local/lib/python3.7/dist-packages (from prompt-toolkit<2.0.0,>=1.0.4->ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.3) (0.2.5)
Requirement already satisfied: ipython-genutils in /usr/local/lib/python3.7/dist-packages (from traitlets>=4.2->ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.3) (0.2.0)
Requirement already satisfied: oauthlib>=3.0.0 in /usr/local/lib/python3.7/dist-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard; extra == "extra"->stable-baselines3[extra]->finrl==0.0.3) (3.1.0)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.7/dist-packages (from importlib-metadata; python_version < "3.8"->markdown>=2.6.8->tensorboard; extra == "extra"->stable-baselines3[extra]->finrl==0.0.3) (3.4.0)
Requirement already satisfied: pyasn1<0.5.0,>=0.4.6 in /usr/local/lib/python3.7/dist-packages (from pyasn1-modules>=0.2.1->google-auth<2,>=1.6.3->tensorboard; extra == "extra"->stable-baselines3[extra]->finrl==0.0.3) (0.4.8)
Building wheels for collected packages: finrl, yfinance, pyfolio, empyrical
  Building wheel for finrl (setup.py) ... done
Created wheel for finrl: filename=finrl-0.0.3-cp37-none-any.whl size=38201 sha256=680913f069c396f38e0c508600b450102190f08e0b0bba53c58c334981ccbe6c
Stored in directory: /tmp/pip-ephem-wheel-cache-a1bbwmjm/wheels/9c/19/bf/c644def96612df1ad42c94d5304966797eaa3221dffc5efe0b
  Building wheel for yfinance (setup.py) ... done
Created wheel for yfinance: filename=yfinance-0.1.55-py2.py3-none-any.whl size=22616 sha256=2a578f51d56d3d8fff23683c041d6815f487abf3c6c97d4739d122055a6599b3
Stored in directory: /root/.cache/pip/wheels/04/98/cc/2702a4242d60bdc14f48b4557c427ded1fe92aedf257d4565c
  Building wheel for pyfolio (setup.py) ... done
Created wheel for pyfolio: filename=pyfolio-0.9.2+75.g4b901f6-cp37-none-any.whl size=75764 sha256=7e1ceb3360e57235c3d97bdbb36969c8ac05da709aa781413f1eca9088669323
Stored in directory: /tmp/pip-ephem-wheel-cache-a1bbwmjm/wheels/43/ce/d9/6752fb6e03205408773235435205a0519d2c608a94f1976e56
  Building wheel for empyrical (setup.py) ... done
Created wheel for empyrical: filename=empyrical-0.5.5-cp37-none-any.whl size=39764 sha256=6b772c8c03b900a08799fdd831ee627277cc2c9241dc3103e2602fdd21781bb1
Stored in directory: /root/.cache/pip/wheels/ea/b2/c8/6769d8444d2f2e608fae2641833110668d0ffd1abeb2e9f3fc
Successfully built finrl yfinance pyfolio empyrical
Installing collected packages: int-date, stockstats, lxml, yfinance, stable-baselines3, empyrical, pyfolio, finrl
Found existing installation: lxml 4.2.6
Uninstalling lxml-4.2.6:
Successfully uninstalled lxml-4.2.6
Successfully installed empyrical-0.5.5 finrl-0.0.3 int-date-0.1.8 lxml-4.6.2 pyfolio-0.9.2+75.g4b901f6 stable-baselines3-0.10.0 stockstats-0.3.2 yfinance-0.1.55
###Markdown
2.2. Check if the additional packages needed are present, if not install them:
* Yahoo Finance API
* pandas
* numpy
* matplotlib
* stockstats
* OpenAI gym
* stable-baselines
* tensorflow
* pyfolio
2.3. Import Packages
###Code
import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
matplotlib.use('Agg')
import datetime
from finrl.config import config
from finrl.marketdata.yahoodownloader import YahooDownloader
from finrl.preprocessing.preprocessors import FeatureEngineer
from finrl.preprocessing.data import data_split
from finrl.env.env_portfolio import StockPortfolioEnv
from finrl.model.models import DRLAgent
from finrl.trade.backtest import backtest_stats, backtest_plot, get_daily_return, get_baseline,convert_daily_return_to_pyfolio_ts
import sys
sys.path.append("../FinRL-Library")
###Output
/usr/local/lib/python3.7/dist-packages/pyfolio/pos.py:27: UserWarning: Module "zipline.assets" not found; multipliers will not be applied to position notionals.
'Module "zipline.assets" not found; multipliers will not be applied'
###Markdown
2.4. Create Folders
###Code
import os
if not os.path.exists("./" + config.DATA_SAVE_DIR):
os.makedirs("./" + config.DATA_SAVE_DIR)
if not os.path.exists("./" + config.TRAINED_MODEL_DIR):
os.makedirs("./" + config.TRAINED_MODEL_DIR)
if not os.path.exists("./" + config.TENSORBOARD_LOG_DIR):
os.makedirs("./" + config.TENSORBOARD_LOG_DIR)
if not os.path.exists("./" + config.RESULTS_DIR):
os.makedirs("./" + config.RESULTS_DIR)
###Output
_____no_output_____
###Markdown
Part 3. Download Data
Yahoo Finance is a website that provides stock data, financial news, financial reports, etc. All the data provided by Yahoo Finance is free.
* FinRL uses a class **YahooDownloader** to fetch data from Yahoo Finance API
* Call Limit: Using the Public API (without authentication), you are limited to 2,000 requests per hour per IP (or up to a total of 48,000 requests a day).
###Code
print(config.DOW_30_TICKER)
df = YahooDownloader(start_date = '2008-01-01',
end_date = '2021-01-01',
ticker_list = config.DOW_30_TICKER).fetch_data()
df.head()
df.shape
###Output
_____no_output_____
###Markdown
Part 4: Preprocess Data
Data preprocessing is a crucial step for training a high quality machine learning model. We need to check for missing data and do feature engineering in order to convert the data into a model-ready state.
* Add technical indicators. In practical trading, various information needs to be taken into account, for example the historical stock prices, current holding shares, technical indicators, etc. In this article, we demonstrate two trend-following technical indicators: MACD and RSI.
* Add turbulence index. Risk-aversion reflects whether an investor will choose to preserve the capital. It also influences one's trading strategy when facing different market volatility levels. To control the risk in a worst-case scenario, such as the financial crisis of 2007–2008, FinRL employs the financial turbulence index that measures extreme asset price fluctuation.
###Code
fe = FeatureEngineer(
use_technical_indicator=True,
use_turbulence=False,
user_defined_feature = False)
df = fe.preprocess_data(df)
df.shape
df.head()
###Output
_____no_output_____
###Markdown
Add covariance matrix as states
###Code
# add covariance matrix as states
df=df.sort_values(['date','tic'],ignore_index=True)
df.index = df.date.factorize()[0]
cov_list = []
# look back is one year
lookback=252
for i in range(lookback,len(df.index.unique())):
data_lookback = df.loc[i-lookback:i,:]
price_lookback=data_lookback.pivot_table(index = 'date',columns = 'tic', values = 'close')
return_lookback = price_lookback.pct_change().dropna()
covs = return_lookback.cov().values
cov_list.append(covs)
df_cov = pd.DataFrame({'date':df.date.unique()[lookback:],'cov_list':cov_list})
df = df.merge(df_cov, on='date')
df = df.sort_values(['date','tic']).reset_index(drop=True)
df.shape
df.head()
###Output
_____no_output_____
###Markdown
Part 5. Design Environment
Considering the stochastic and interactive nature of automated stock trading tasks, a financial task is modeled as a **Markov Decision Process (MDP)** problem. The training process involves observing stock price changes, taking an action and calculating the reward, so that the agent adjusts its strategy accordingly. By interacting with the environment, the trading agent derives a trading strategy that maximizes rewards as time proceeds.
Our trading environments, based on the OpenAI Gym framework, simulate live stock markets with real market data according to the principle of time-driven simulation.
The action space describes the allowed actions that the agent uses to interact with the environment. Normally, action a includes three actions: {-1, 0, 1}, where -1, 0, 1 represent selling, holding, and buying one share. Also, an action can be carried upon multiple shares. We use an action space {-k,…,-1, 0, 1, …, k}, where k denotes the number of shares to buy and -k denotes the number of shares to sell. For example, "Buy 10 shares of AAPL" or "Sell 10 shares of AAPL" are 10 or -10, respectively. The continuous action space needs to be normalized to [-1, 1], since the policy is defined on a Gaussian distribution, which needs to be normalized and symmetric.
Training data split: 2009-01-01 to 2018-12-31
###Code
train = data_split(df, '2009-01-01','2019-01-01')
#trade = data_split(df, '2020-01-01', config.END_DATE)
train.head()
###Output
_____no_output_____
###Markdown
Environment for Portfolio Allocation
###Code
import numpy as np
import pandas as pd
from gym.utils import seeding
import gym
from gym import spaces
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
from stable_baselines3.common.vec_env import DummyVecEnv
class StockPortfolioEnv(gym.Env):
"""A single stock trading environment for OpenAI gym
Attributes
----------
df: DataFrame
input data
stock_dim : int
number of unique stocks
hmax : int
maximum number of shares to trade
initial_amount : int
start money
transaction_cost_pct: float
transaction cost percentage per trade
reward_scaling: float
scaling factor for reward, good for training
state_space: int
the dimension of input features
action_space: int
equals stock dimension
tech_indicator_list: list
a list of technical indicator names
turbulence_threshold: int
a threshold to control risk aversion
day: int
an increment number to control date
Methods
-------
_sell_stock()
perform sell action based on the sign of the action
_buy_stock()
perform buy action based on the sign of the action
step()
at each step the agent will return actions, then
we will calculate the reward, and return the next observation.
reset()
reset the environment
render()
use render to return other functions
save_asset_memory()
return account value at each time step
save_action_memory()
return actions/positions at each time step
"""
metadata = {'render.modes': ['human']}
def __init__(self,
df,
stock_dim,
hmax,
initial_amount,
transaction_cost_pct,
reward_scaling,
state_space,
action_space,
tech_indicator_list,
turbulence_threshold=None,
lookback=252,
day = 0):
#super(StockEnv, self).__init__()
#money = 10 , scope = 1
self.day = day
self.lookback=lookback
self.df = df
self.stock_dim = stock_dim
self.hmax = hmax
self.initial_amount = initial_amount
self.transaction_cost_pct =transaction_cost_pct
self.reward_scaling = reward_scaling
self.state_space = state_space
self.action_space = action_space
self.tech_indicator_list = tech_indicator_list
# action_space normalization and shape is self.stock_dim
self.action_space = spaces.Box(low = 0, high = 1,shape = (self.action_space,))
# Shape = (34, 30)
# covariance matrix + technical indicators
self.observation_space = spaces.Box(low=-np.inf, high=np.inf, shape = (self.state_space+len(self.tech_indicator_list),self.state_space))
# load data from a pandas dataframe
self.data = self.df.loc[self.day,:]
self.covs = self.data['cov_list'].values[0]
self.state = np.append(np.array(self.covs), [self.data[tech].values.tolist() for tech in self.tech_indicator_list ], axis=0)
self.terminal = False
self.turbulence_threshold = turbulence_threshold
# initialize state: initial portfolio return + individual stock return + individual weights
self.portfolio_value = self.initial_amount
# memorize portfolio value each step
self.asset_memory = [self.initial_amount]
# memorize portfolio return each step
self.portfolio_return_memory = [0]
self.actions_memory=[[1/self.stock_dim]*self.stock_dim]
self.date_memory=[self.data.date.unique()[0]]
def step(self, actions):
# print(self.day)
self.terminal = self.day >= len(self.df.index.unique())-1
# print(actions)
if self.terminal:
df = pd.DataFrame(self.portfolio_return_memory)
df.columns = ['daily_return']
plt.plot(df.daily_return.cumsum(),'r')
plt.savefig('results/cumulative_reward.png')
plt.close()
plt.plot(self.portfolio_return_memory,'r')
plt.savefig('results/rewards.png')
plt.close()
print("=================================")
print("begin_total_asset:{}".format(self.asset_memory[0]))
print("end_total_asset:{}".format(self.portfolio_value))
df_daily_return = pd.DataFrame(self.portfolio_return_memory)
df_daily_return.columns = ['daily_return']
if df_daily_return['daily_return'].std() !=0:
sharpe = (252**0.5)*df_daily_return['daily_return'].mean()/ \
df_daily_return['daily_return'].std()
print("Sharpe: ",sharpe)
print("=================================")
return self.state, self.reward, self.terminal,{}
else:
#print("Model actions: ",actions)
# actions are the portfolio weight
# normalize to sum of 1
#if (np.array(actions) - np.array(actions).min()).sum() != 0:
# norm_actions = (np.array(actions) - np.array(actions).min()) / (np.array(actions) - np.array(actions).min()).sum()
#else:
# norm_actions = actions
weights = self.softmax_normalization(actions)
#print("Normalized actions: ", weights)
self.actions_memory.append(weights)
last_day_memory = self.data
#load next state
self.day += 1
self.data = self.df.loc[self.day,:]
self.covs = self.data['cov_list'].values[0]
self.state = np.append(np.array(self.covs), [self.data[tech].values.tolist() for tech in self.tech_indicator_list ], axis=0)
#print(self.state)
# calculate portfolio return
# individual stocks' return * weight
portfolio_return = sum(((self.data.close.values / last_day_memory.close.values)-1)*weights)
# update portfolio value
new_portfolio_value = self.portfolio_value*(1+portfolio_return)
self.portfolio_value = new_portfolio_value
# save into memory
self.portfolio_return_memory.append(portfolio_return)
self.date_memory.append(self.data.date.unique()[0])
self.asset_memory.append(new_portfolio_value)
# the reward is the new portfolio value (i.e., the end portfolio value at the final step)
self.reward = new_portfolio_value
#print("Step reward: ", self.reward)
#self.reward = self.reward*self.reward_scaling
return self.state, self.reward, self.terminal, {}
def reset(self):
self.asset_memory = [self.initial_amount]
self.day = 0
self.data = self.df.loc[self.day,:]
# load states
self.covs = self.data['cov_list'].values[0]
self.state = np.append(np.array(self.covs), [self.data[tech].values.tolist() for tech in self.tech_indicator_list ], axis=0)
self.portfolio_value = self.initial_amount
#self.cost = 0
#self.trades = 0
self.terminal = False
self.portfolio_return_memory = [0]
self.actions_memory=[[1/self.stock_dim]*self.stock_dim]
self.date_memory=[self.data.date.unique()[0]]
return self.state
def render(self, mode='human'):
return self.state
def softmax_normalization(self, actions):
numerator = np.exp(actions)
denominator = np.sum(np.exp(actions))
softmax_output = numerator/denominator
return softmax_output
def save_asset_memory(self):
date_list = self.date_memory
portfolio_return = self.portfolio_return_memory
#print(len(date_list))
#print(len(asset_list))
df_account_value = pd.DataFrame({'date':date_list,'daily_return':portfolio_return})
return df_account_value
def save_action_memory(self):
# date and close price length must match actions length
date_list = self.date_memory
df_date = pd.DataFrame(date_list)
df_date.columns = ['date']
action_list = self.actions_memory
df_actions = pd.DataFrame(action_list)
df_actions.columns = self.data.tic.values
df_actions.index = df_date.date
#df_actions = pd.DataFrame({'date':date_list,'actions':action_list})
return df_actions
def _seed(self, seed=None):
self.np_random, seed = seeding.np_random(seed)
return [seed]
def get_sb_env(self):
e = DummyVecEnv([lambda: self])
obs = e.reset()
return e, obs
stock_dimension = len(train.tic.unique())
state_space = stock_dimension
print(f"Stock Dimension: {stock_dimension}, State Space: {state_space}")
env_kwargs = {
"hmax": 100,
"initial_amount": 1000000,
"transaction_cost_pct": 0.001,
"state_space": state_space,
"stock_dim": stock_dimension,
"tech_indicator_list": config.TECHNICAL_INDICATORS_LIST,
"action_space": stock_dimension,
"reward_scaling": 1e-4
}
e_train_gym = StockPortfolioEnv(df = train, **env_kwargs)
env_train, _ = e_train_gym.get_sb_env()
print(type(env_train))
###Output
<class 'stable_baselines3.common.vec_env.dummy_vec_env.DummyVecEnv'>
###Markdown
Part 6: Implement DRL Algorithms
* The implementation of the DRL algorithms is based on **OpenAI Baselines** and **Stable Baselines**. Stable Baselines is a fork of OpenAI Baselines, with a major structural refactoring, and code cleanups.
* FinRL library includes fine-tuned standard DRL algorithms, such as DQN, DDPG,
Multi-Agent DDPG, PPO, SAC, A2C and TD3. We also allow users to
design their own DRL algorithms by adapting these DRL algorithms.
###Code
# initialize
agent = DRLAgent(env = env_train)
###Output
_____no_output_____
###Markdown
Model 1: **A2C**
###Code
agent = DRLAgent(env = env_train)
A2C_PARAMS = {"n_steps": 5, "ent_coef": 0.005, "learning_rate": 0.0002}
model_a2c = agent.get_model(model_name="a2c",model_kwargs = A2C_PARAMS)
trained_a2c = agent.train_model(model=model_a2c,
tb_log_name='a2c',
total_timesteps=60000)
###Output
Logging to tensorboard_log/a2c/a2c_1
-------------------------------------
| time/ | |
| fps | 130 |
| iterations | 100 |
| time_elapsed | 3 |
| total_timesteps | 500 |
| train/ | |
| entropy_loss | -42.5 |
| explained_variance | -4.23e+15 |
| learning_rate | 0.0002 |
| n_updates | 99 |
| policy_loss | 1.8e+08 |
| std | 0.997 |
| value_loss | 2.48e+13 |
-------------------------------------
-------------------------------------
| time/ | |
| fps | 157 |
| iterations | 200 |
| time_elapsed | 6 |
| total_timesteps | 1000 |
| train/ | |
| entropy_loss | -42.5 |
| explained_variance | -7.89e+14 |
| learning_rate | 0.0002 |
| n_updates | 199 |
| policy_loss | 2.44e+08 |
| std | 0.997 |
| value_loss | 4.08e+13 |
-------------------------------------
-------------------------------------
| time/ | |
| fps | 167 |
| iterations | 300 |
| time_elapsed | 8 |
| total_timesteps | 1500 |
| train/ | |
| entropy_loss | -42.5 |
| explained_variance | -9.77e+25 |
| learning_rate | 0.0002 |
| n_updates | 299 |
| policy_loss | 4.02e+08 |
| std | 0.997 |
| value_loss | 9.82e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 179 |
| iterations | 400 |
| time_elapsed | 11 |
| total_timesteps | 2000 |
| train/ | |
| entropy_loss | -42.5 |
| explained_variance | -6.9e+16 |
| learning_rate | 0.0002 |
| n_updates | 399 |
| policy_loss | 4.57e+08 |
| std | 0.997 |
| value_loss | 1.39e+14 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 189 |
| iterations | 500 |
| time_elapsed | 13 |
| total_timesteps | 2500 |
| train/ | |
| entropy_loss | -42.5 |
| explained_variance | -4.81e+17 |
| learning_rate | 0.0002 |
| n_updates | 499 |
| policy_loss | 6.13e+08 |
| std | 0.996 |
| value_loss | 2.53e+14 |
-------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4550666.315740787
Sharpe: 1.0302838133559835
=================================
------------------------------------
| time/ | |
| fps | 192 |
| iterations | 600 |
| time_elapsed | 15 |
| total_timesteps | 3000 |
| train/ | |
| entropy_loss | -42.4 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 599 |
| policy_loss | 1.96e+08 |
| std | 0.996 |
| value_loss | 2.53e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 197 |
| iterations | 700 |
| time_elapsed | 17 |
| total_timesteps | 3500 |
| train/ | |
| entropy_loss | -42.4 |
| explained_variance | -2.18e+17 |
| learning_rate | 0.0002 |
| n_updates | 699 |
| policy_loss | 2.37e+08 |
| std | 0.996 |
| value_loss | 4.06e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 202 |
| iterations | 800 |
| time_elapsed | 19 |
| total_timesteps | 4000 |
| train/ | |
| entropy_loss | -42.4 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 799 |
| policy_loss | 3.7e+08 |
| std | 0.995 |
| value_loss | 1.01e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 206 |
| iterations | 900 |
| time_elapsed | 21 |
| total_timesteps | 4500 |
| train/ | |
| entropy_loss | -42.4 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 899 |
| policy_loss | 4.31e+08 |
| std | 0.995 |
| value_loss | 1.29e+14 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 208 |
| iterations | 1000 |
| time_elapsed | 23 |
| total_timesteps | 5000 |
| train/ | |
| entropy_loss | -42.4 |
| explained_variance | -1.18e+18 |
| learning_rate | 0.0002 |
| n_updates | 999 |
| policy_loss | 6.01e+08 |
| std | 0.995 |
| value_loss | 2.52e+14 |
-------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4538927.251756459
Sharpe: 1.0239597239761906
=================================
------------------------------------
| time/ | |
| fps | 209 |
| iterations | 1100 |
| time_elapsed | 26 |
| total_timesteps | 5500 |
| train/ | |
| entropy_loss | -42.4 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 1099 |
| policy_loss | 2.02e+08 |
| std | 0.995 |
| value_loss | 2.44e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 211 |
| iterations | 1200 |
| time_elapsed | 28 |
| total_timesteps | 6000 |
| train/ | |
| entropy_loss | -42.4 |
| explained_variance | -3.58e+18 |
| learning_rate | 0.0002 |
| n_updates | 1199 |
| policy_loss | 2.77e+08 |
| std | 0.995 |
| value_loss | 4.09e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 213 |
| iterations | 1300 |
| time_elapsed | 30 |
| total_timesteps | 6500 |
| train/ | |
| entropy_loss | -42.4 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 1299 |
| policy_loss | 3.35e+08 |
| std | 0.994 |
| value_loss | 8.06e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 215 |
| iterations | 1400 |
| time_elapsed | 32 |
| total_timesteps | 7000 |
| train/ | |
| entropy_loss | -42.4 |
| explained_variance | -1.69e+20 |
| learning_rate | 0.0002 |
| n_updates | 1399 |
| policy_loss | 4.1e+08 |
| std | 0.994 |
| value_loss | 1.2e+14 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 217 |
| iterations | 1500 |
| time_elapsed | 34 |
| total_timesteps | 7500 |
| train/ | |
| entropy_loss | -42.4 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 1499 |
| policy_loss | 5.74e+08 |
| std | 0.994 |
| value_loss | 2.47e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4569623.286530429
Sharpe: 1.0309827263626288
=================================
-------------------------------------
| time/ | |
| fps | 217 |
| iterations | 1600 |
| time_elapsed | 36 |
| total_timesteps | 8000 |
| train/ | |
| entropy_loss | -42.4 |
| explained_variance | -1.11e+24 |
| learning_rate | 0.0002 |
| n_updates | 1599 |
| policy_loss | 1.81e+08 |
| std | 0.994 |
| value_loss | 2.28e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 218 |
| iterations | 1700 |
| time_elapsed | 38 |
| total_timesteps | 8500 |
| train/ | |
| entropy_loss | -42.4 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 1699 |
| policy_loss | 2.6e+08 |
| std | 0.993 |
| value_loss | 4.54e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 216 |
| iterations | 1800 |
| time_elapsed | 41 |
| total_timesteps | 9000 |
| train/ | |
| entropy_loss | -42.3 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 1799 |
| policy_loss | 3.57e+08 |
| std | 0.993 |
| value_loss | 9.62e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 216 |
| iterations | 1900 |
| time_elapsed | 43 |
| total_timesteps | 9500 |
| train/ | |
| entropy_loss | -42.3 |
| explained_variance | -6.95e+20 |
| learning_rate | 0.0002 |
| n_updates | 1899 |
| policy_loss | 4.08e+08 |
| std | 0.992 |
| value_loss | 1.33e+14 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 216 |
| iterations | 2000 |
| time_elapsed | 46 |
| total_timesteps | 10000 |
| train/ | |
| entropy_loss | -42.3 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 1999 |
| policy_loss | 7.22e+08 |
| std | 0.991 |
| value_loss | 3.02e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4784563.101868668
Sharpe: 1.0546332869946304
=================================
------------------------------------
| time/ | |
| fps | 216 |
| iterations | 2100 |
| time_elapsed | 48 |
| total_timesteps | 10500 |
| train/ | |
| entropy_loss | -42.3 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 2099 |
| policy_loss | 1.64e+08 |
| std | 0.991 |
| value_loss | 2.02e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 217 |
| iterations | 2200 |
| time_elapsed | 50 |
| total_timesteps | 11000 |
| train/ | |
| entropy_loss | -42.3 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 2199 |
| policy_loss | 2.31e+08 |
| std | 0.99 |
| value_loss | 3.61e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 218 |
| iterations | 2300 |
| time_elapsed | 52 |
| total_timesteps | 11500 |
| train/ | |
| entropy_loss | -42.3 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 2299 |
| policy_loss | 3.07e+08 |
| std | 0.99 |
| value_loss | 7.81e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 219 |
| iterations | 2400 |
| time_elapsed | 54 |
| total_timesteps | 12000 |
| train/ | |
| entropy_loss | -42.3 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 2399 |
| policy_loss | 4.03e+08 |
| std | 0.99 |
| value_loss | 1.05e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 220 |
| iterations | 2500 |
| time_elapsed | 56 |
| total_timesteps | 12500 |
| train/ | |
| entropy_loss | -42.3 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 2499 |
| policy_loss | 5.57e+08 |
| std | 0.99 |
| value_loss | 2.27e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4265807.380536508
Sharpe: 0.9867782700137868
=================================
-------------------------------------
| time/ | |
| fps | 219 |
| iterations | 2600 |
| time_elapsed | 59 |
| total_timesteps | 13000 |
| train/ | |
| entropy_loss | -42.3 |
| explained_variance | -3.35e+20 |
| learning_rate | 0.0002 |
| n_updates | 2599 |
| policy_loss | 1.62e+08 |
| std | 0.989 |
| value_loss | 1.89e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 220 |
| iterations | 2700 |
| time_elapsed | 61 |
| total_timesteps | 13500 |
| train/ | |
| entropy_loss | -42.2 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 2699 |
| policy_loss | 2.56e+08 |
| std | 0.989 |
| value_loss | 4.37e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 221 |
| iterations | 2800 |
| time_elapsed | 63 |
| total_timesteps | 14000 |
| train/ | |
| entropy_loss | -42.2 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 2799 |
| policy_loss | 3.57e+08 |
| std | 0.989 |
| value_loss | 9.53e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 221 |
| iterations | 2900 |
| time_elapsed | 65 |
| total_timesteps | 14500 |
| train/ | |
| entropy_loss | -42.2 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 2899 |
| policy_loss | 4.31e+08 |
| std | 0.988 |
| value_loss | 1.42e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 222 |
| iterations | 3000 |
| time_elapsed | 67 |
| total_timesteps | 15000 |
| train/ | |
| entropy_loss | -42.2 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 2999 |
| policy_loss | 6.16e+08 |
| std | 0.988 |
| value_loss | 2.68e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4737187.266470802
Sharpe: 1.048554781654813
=================================
------------------------------------
| time/ | |
| fps | 222 |
| iterations | 3100 |
| time_elapsed | 69 |
| total_timesteps | 15500 |
| train/ | |
| entropy_loss | -42.2 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 3099 |
| policy_loss | 1.57e+08 |
| std | 0.988 |
| value_loss | 1.96e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 222 |
| iterations | 3200 |
| time_elapsed | 71 |
| total_timesteps | 16000 |
| train/ | |
| entropy_loss | -42.2 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 3199 |
| policy_loss | 2.45e+08 |
| std | 0.988 |
| value_loss | 3.58e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 223 |
| iterations | 3300 |
| time_elapsed | 73 |
| total_timesteps | 16500 |
| train/ | |
| entropy_loss | -42.2 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 3299 |
| policy_loss | 3.71e+08 |
| std | 0.987 |
| value_loss | 8.38e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 223 |
| iterations | 3400 |
| time_elapsed | 75 |
| total_timesteps | 17000 |
| train/ | |
| entropy_loss | -42.2 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 3399 |
| policy_loss | 3.89e+08 |
| std | 0.987 |
| value_loss | 1.19e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 223 |
| iterations | 3500 |
| time_elapsed | 78 |
| total_timesteps | 17500 |
| train/ | |
| entropy_loss | -42.2 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 3499 |
| policy_loss | 5.47e+08 |
| std | 0.987 |
| value_loss | 2.32e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4594345.465329124
Sharpe: 1.0338662249918555
=================================
-------------------------------------
| time/ | |
| fps | 223 |
| iterations | 3600 |
| time_elapsed | 80 |
| total_timesteps | 18000 |
| train/ | |
| entropy_loss | -42.2 |
| explained_variance | -2.39e+23 |
| learning_rate | 0.0002 |
| n_updates | 3599 |
| policy_loss | 1.56e+08 |
| std | 0.987 |
| value_loss | 1.98e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 224 |
| iterations | 3700 |
| time_elapsed | 82 |
| total_timesteps | 18500 |
| train/ | |
| entropy_loss | -42.1 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 3699 |
| policy_loss | 2.45e+08 |
| std | 0.986 |
| value_loss | 3.78e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 224 |
| iterations | 3800 |
| time_elapsed | 84 |
| total_timesteps | 19000 |
| train/ | |
| entropy_loss | -42.1 |
| explained_variance | -1.11e+24 |
| learning_rate | 0.0002 |
| n_updates | 3799 |
| policy_loss | 3.75e+08 |
| std | 0.986 |
| value_loss | 9.09e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 224 |
| iterations | 3900 |
| time_elapsed | 86 |
| total_timesteps | 19500 |
| train/ | |
| entropy_loss | -42.1 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 3899 |
| policy_loss | 4.23e+08 |
| std | 0.986 |
| value_loss | 1.09e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 225 |
| iterations | 4000 |
| time_elapsed | 88 |
| total_timesteps | 20000 |
| train/ | |
| entropy_loss | -42.1 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 3999 |
| policy_loss | 5.46e+08 |
| std | 0.985 |
| value_loss | 2.21e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4537629.671792137
Sharpe: 1.027306122996326
=================================
------------------------------------
| time/ | |
| fps | 224 |
| iterations | 4100 |
| time_elapsed | 91 |
| total_timesteps | 20500 |
| train/ | |
| entropy_loss | -42.1 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 4099 |
| policy_loss | 1.76e+08 |
| std | 0.985 |
| value_loss | 1.96e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 225 |
| iterations | 4200 |
| time_elapsed | 93 |
| total_timesteps | 21000 |
| train/ | |
| entropy_loss | -42 |
| explained_variance | -4.27e+23 |
| learning_rate | 0.0002 |
| n_updates | 4199 |
| policy_loss | 2.17e+08 |
| std | 0.983 |
| value_loss | 3.5e+13 |
-------------------------------------
-------------------------------------
| time/ | |
| fps | 225 |
| iterations | 4300 |
| time_elapsed | 95 |
| total_timesteps | 21500 |
| train/ | |
| entropy_loss | -42 |
| explained_variance | -9.61e+23 |
| learning_rate | 0.0002 |
| n_updates | 4299 |
| policy_loss | 3.36e+08 |
| std | 0.982 |
| value_loss | 7.88e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 225 |
| iterations | 4400 |
| time_elapsed | 97 |
| total_timesteps | 22000 |
| train/ | |
| entropy_loss | -42 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 4399 |
| policy_loss | 3.9e+08 |
| std | 0.982 |
| value_loss | 1.09e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 226 |
| iterations | 4500 |
| time_elapsed | 99 |
| total_timesteps | 22500 |
| train/ | |
| entropy_loss | -42 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 4499 |
| policy_loss | 5.96e+08 |
| std | 0.982 |
| value_loss | 2.24e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4641050.148925118
Sharpe: 1.035206741352005
=================================
------------------------------------
| time/ | |
| fps | 226 |
| iterations | 4600 |
| time_elapsed | 101 |
| total_timesteps | 23000 |
| train/ | |
| entropy_loss | -42 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 4599 |
| policy_loss | 1.86e+08 |
| std | 0.981 |
| value_loss | 2.04e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 226 |
| iterations | 4700 |
| time_elapsed | 103 |
| total_timesteps | 23500 |
| train/ | |
| entropy_loss | -42 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 4699 |
| policy_loss | 2.4e+08 |
| std | 0.981 |
| value_loss | 4.09e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 226 |
| iterations | 4800 |
| time_elapsed | 105 |
| total_timesteps | 24000 |
| train/ | |
| entropy_loss | -42 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 4799 |
| policy_loss | 3.69e+08 |
| std | 0.981 |
| value_loss | 9.69e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 226 |
| iterations | 4900 |
| time_elapsed | 108 |
| total_timesteps | 24500 |
| train/ | |
| entropy_loss | -42 |
| explained_variance | -5.9e+21 |
| learning_rate | 0.0002 |
| n_updates | 4899 |
| policy_loss | 4.46e+08 |
| std | 0.98 |
| value_loss | 1.36e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 226 |
| iterations | 5000 |
| time_elapsed | 110 |
| total_timesteps | 25000 |
| train/ | |
| entropy_loss | -42 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 4999 |
| policy_loss | 6.05e+08 |
| std | 0.98 |
| value_loss | 2.56e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:5080677.099515816
Sharpe: 1.0970818985375046
=================================
------------------------------------
| time/ | |
| fps | 225 |
| iterations | 5100 |
| time_elapsed | 113 |
| total_timesteps | 25500 |
| train/ | |
| entropy_loss | -42 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 5099 |
| policy_loss | 1.7e+08 |
| std | 0.98 |
| value_loss | 2.24e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 224 |
| iterations | 5200 |
| time_elapsed | 115 |
| total_timesteps | 26000 |
| train/ | |
| entropy_loss | -42 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 5199 |
| policy_loss | 2.39e+08 |
| std | 0.98 |
| value_loss | 3.92e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 224 |
| iterations | 5300 |
| time_elapsed | 117 |
| total_timesteps | 26500 |
| train/ | |
| entropy_loss | -42 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 5299 |
| policy_loss | 3.24e+08 |
| std | 0.98 |
| value_loss | 8.04e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 224 |
| iterations | 5400 |
| time_elapsed | 120 |
| total_timesteps | 27000 |
| train/ | |
| entropy_loss | -41.9 |
| explained_variance | -4.8e+21 |
| learning_rate | 0.0002 |
| n_updates | 5399 |
| policy_loss | 4.29e+08 |
| std | 0.979 |
| value_loss | 1.22e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 224 |
| iterations | 5500 |
| time_elapsed | 122 |
| total_timesteps | 27500 |
| train/ | |
| entropy_loss | -41.9 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 5499 |
| policy_loss | 5.4e+08 |
| std | 0.979 |
| value_loss | 2.31e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4811657.503165074
Sharpe: 1.0589276474603557
=================================
------------------------------------
| time/ | |
| fps | 224 |
| iterations | 5600 |
| time_elapsed | 124 |
| total_timesteps | 28000 |
| train/ | |
| entropy_loss | -41.9 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 5599 |
| policy_loss | 1.71e+08 |
| std | 0.978 |
| value_loss | 2.12e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 224 |
| iterations | 5700 |
| time_elapsed | 126 |
| total_timesteps | 28500 |
| train/ | |
| entropy_loss | -41.9 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 5699 |
| policy_loss | 2.15e+08 |
| std | 0.978 |
| value_loss | 3.76e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 224 |
| iterations | 5800 |
| time_elapsed | 129 |
| total_timesteps | 29000 |
| train/ | |
| entropy_loss | -41.9 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 5799 |
| policy_loss | 3.25e+08 |
| std | 0.978 |
| value_loss | 7.21e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 224 |
| iterations | 5900 |
| time_elapsed | 131 |
| total_timesteps | 29500 |
| train/ | |
| entropy_loss | -41.9 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 5899 |
| policy_loss | 3.48e+08 |
| std | 0.977 |
| value_loss | 9.82e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 225 |
| iterations | 6000 |
| time_elapsed | 133 |
| total_timesteps | 30000 |
| train/ | |
| entropy_loss | -41.8 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 5999 |
| policy_loss | 5.64e+08 |
| std | 0.976 |
| value_loss | 2.13e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4485060.270775738
Sharpe: 1.01141473877631
=================================
------------------------------------
| time/ | |
| fps | 224 |
| iterations | 6100 |
| time_elapsed | 135 |
| total_timesteps | 30500 |
| train/ | |
| entropy_loss | -41.8 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 6099 |
| policy_loss | 1.76e+08 |
| std | 0.976 |
| value_loss | 2.21e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 224 |
| iterations | 6200 |
| time_elapsed | 137 |
| total_timesteps | 31000 |
| train/ | |
| entropy_loss | -41.8 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 6199 |
| policy_loss | 2.37e+08 |
| std | 0.976 |
| value_loss | 3.86e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 224 |
| iterations | 6300 |
| time_elapsed | 140 |
| total_timesteps | 31500 |
| train/ | |
| entropy_loss | -41.8 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 6299 |
| policy_loss | 3.28e+08 |
| std | 0.975 |
| value_loss | 7.7e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 224 |
| iterations | 6400 |
| time_elapsed | 142 |
| total_timesteps | 32000 |
| train/ | |
| entropy_loss | -41.8 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 6399 |
| policy_loss | 4.03e+08 |
| std | 0.975 |
| value_loss | 1.03e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 224 |
| iterations | 6500 |
| time_elapsed | 144 |
| total_timesteps | 32500 |
| train/ | |
| entropy_loss | -41.8 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 6499 |
| policy_loss | 5.93e+08 |
| std | 0.975 |
| value_loss | 2.38e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4716704.9549536165
Sharpe: 1.0510500905659037
=================================
------------------------------------
| time/ | |
| fps | 224 |
| iterations | 6600 |
| time_elapsed | 147 |
| total_timesteps | 33000 |
| train/ | |
| entropy_loss | -41.8 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 6599 |
| policy_loss | 1.78e+08 |
| std | 0.975 |
| value_loss | 2.04e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 224 |
| iterations | 6700 |
| time_elapsed | 149 |
| total_timesteps | 33500 |
| train/ | |
| entropy_loss | -41.8 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 6699 |
| policy_loss | 2.4e+08 |
| std | 0.974 |
| value_loss | 3.85e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 224 |
| iterations | 6800 |
| time_elapsed | 151 |
| total_timesteps | 34000 |
| train/ | |
| entropy_loss | -41.8 |
| explained_variance | -1.16e+24 |
| learning_rate | 0.0002 |
| n_updates | 6799 |
| policy_loss | 3.2e+08 |
| std | 0.974 |
| value_loss | 7.66e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 224 |
| iterations | 6900 |
| time_elapsed | 153 |
| total_timesteps | 34500 |
| train/ | |
| entropy_loss | -41.8 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 6899 |
| policy_loss | 3.45e+08 |
| std | 0.973 |
| value_loss | 9.59e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 224 |
| iterations | 7000 |
| time_elapsed | 155 |
| total_timesteps | 35000 |
| train/ | |
| entropy_loss | -41.8 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 6999 |
| policy_loss | 6.22e+08 |
| std | 0.973 |
| value_loss | 2.58e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4722061.646242311
Sharpe: 1.0529486633467167
=================================
------------------------------------
| time/ | |
| fps | 224 |
| iterations | 7100 |
| time_elapsed | 158 |
| total_timesteps | 35500 |
| train/ | |
| entropy_loss | -41.7 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 7099 |
| policy_loss | 1.63e+08 |
| std | 0.973 |
| value_loss | 1.91e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 224 |
| iterations | 7200 |
| time_elapsed | 160 |
| total_timesteps | 36000 |
| train/ | |
| entropy_loss | -41.7 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 7199 |
| policy_loss | 2.26e+08 |
| std | 0.973 |
| value_loss | 3.43e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 224 |
| iterations | 7300 |
| time_elapsed | 162 |
| total_timesteps | 36500 |
| train/ | |
| entropy_loss | -41.7 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 7299 |
| policy_loss | 3.31e+08 |
| std | 0.972 |
| value_loss | 7.69e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 223 |
| iterations | 7400 |
| time_elapsed | 165 |
| total_timesteps | 37000 |
| train/ | |
| entropy_loss | -41.7 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 7399 |
| policy_loss | 3.65e+08 |
| std | 0.971 |
| value_loss | 9.37e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 222 |
| iterations | 7500 |
| time_elapsed | 168 |
| total_timesteps | 37500 |
| train/ | |
| entropy_loss | -41.7 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 7499 |
| policy_loss | 5.72e+08 |
| std | 0.971 |
| value_loss | 2.37e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4651172.332054012
Sharpe: 1.0366825368944979
=================================
------------------------------------
| time/ | |
| fps | 221 |
| iterations | 7600 |
| time_elapsed | 171 |
| total_timesteps | 38000 |
| train/ | |
| entropy_loss | -41.7 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 7599 |
| policy_loss | 1.71e+08 |
| std | 0.971 |
| value_loss | 2e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 220 |
| iterations | 7700 |
| time_elapsed | 174 |
| total_timesteps | 38500 |
| train/ | |
| entropy_loss | -41.6 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 7699 |
| policy_loss | 2e+08 |
| std | 0.97 |
| value_loss | 3.27e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 219 |
| iterations | 7800 |
| time_elapsed | 177 |
| total_timesteps | 39000 |
| train/ | |
| entropy_loss | -41.6 |
| explained_variance | -2.5e+23 |
| learning_rate | 0.0002 |
| n_updates | 7799 |
| policy_loss | 3.23e+08 |
| std | 0.969 |
| value_loss | 8.21e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 218 |
| iterations | 7900 |
| time_elapsed | 181 |
| total_timesteps | 39500 |
| train/ | |
| entropy_loss | -41.6 |
| explained_variance | -3.76e+23 |
| learning_rate | 0.0002 |
| n_updates | 7899 |
| policy_loss | 4.25e+08 |
| std | 0.969 |
| value_loss | 1.23e+14 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 216 |
| iterations | 8000 |
| time_elapsed | 184 |
| total_timesteps | 40000 |
| train/ | |
| entropy_loss | -41.6 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 7999 |
| policy_loss | 5.93e+08 |
| std | 0.969 |
| value_loss | 2.54e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:5004208.576042484
Sharpe: 1.0844189746438444
=================================
------------------------------------
| time/ | |
| fps | 215 |
| iterations | 8100 |
| time_elapsed | 187 |
| total_timesteps | 40500 |
| train/ | |
| entropy_loss | -41.6 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 8099 |
| policy_loss | 1.66e+08 |
| std | 0.969 |
| value_loss | 2e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 215 |
| iterations | 8200 |
| time_elapsed | 189 |
| total_timesteps | 41000 |
| train/ | |
| entropy_loss | -41.6 |
| explained_variance | -9.41e+22 |
| learning_rate | 0.0002 |
| n_updates | 8199 |
| policy_loss | 2.17e+08 |
| std | 0.969 |
| value_loss | 3.1e+13 |
-------------------------------------
-------------------------------------
| time/ | |
| fps | 215 |
| iterations | 8300 |
| time_elapsed | 192 |
| total_timesteps | 41500 |
| train/ | |
| entropy_loss | -41.6 |
| explained_variance | -2.31e+23 |
| learning_rate | 0.0002 |
| n_updates | 8299 |
| policy_loss | 3.37e+08 |
| std | 0.968 |
| value_loss | 7.5e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 215 |
| iterations | 8400 |
| time_elapsed | 194 |
| total_timesteps | 42000 |
| train/ | |
| entropy_loss | -41.6 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 8399 |
| policy_loss | 3.99e+08 |
| std | 0.967 |
| value_loss | 1.15e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 215 |
| iterations | 8500 |
| time_elapsed | 197 |
| total_timesteps | 42500 |
| train/ | |
| entropy_loss | -41.5 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 8499 |
| policy_loss | 5.83e+08 |
| std | 0.967 |
| value_loss | 2.03e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4690651.093610478
Sharpe: 1.0439707122222264
=================================
-------------------------------------
| time/ | |
| fps | 215 |
| iterations | 8600 |
| time_elapsed | 199 |
| total_timesteps | 43000 |
| train/ | |
| entropy_loss | -41.5 |
| explained_variance | -1.44e+21 |
| learning_rate | 0.0002 |
| n_updates | 8599 |
| policy_loss | 1.58e+08 |
| std | 0.967 |
| value_loss | 1.95e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 215 |
| iterations | 8700 |
| time_elapsed | 202 |
| total_timesteps | 43500 |
| train/ | |
| entropy_loss | -41.5 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 8699 |
| policy_loss | 2.11e+08 |
| std | 0.966 |
| value_loss | 3.08e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 215 |
| iterations | 8800 |
| time_elapsed | 204 |
| total_timesteps | 44000 |
| train/ | |
| entropy_loss | -41.5 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 8799 |
| policy_loss | 3.28e+08 |
| std | 0.965 |
| value_loss | 7.03e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 214 |
| iterations | 8900 |
| time_elapsed | 207 |
| total_timesteps | 44500 |
| train/ | |
| entropy_loss | -41.5 |
| explained_variance | -3.36e+23 |
| learning_rate | 0.0002 |
| n_updates | 8899 |
| policy_loss | 4.06e+08 |
| std | 0.965 |
| value_loss | 1.1e+14 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 213 |
| iterations | 9000 |
| time_elapsed | 210 |
| total_timesteps | 45000 |
| train/ | |
| entropy_loss | -41.5 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 8999 |
| policy_loss | 5.2e+08 |
| std | 0.964 |
| value_loss | 1.98e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4660061.433540329
Sharpe: 1.04048695684595
=================================
-------------------------------------
| time/ | |
| fps | 213 |
| iterations | 9100 |
| time_elapsed | 213 |
| total_timesteps | 45500 |
| train/ | |
| entropy_loss | -41.4 |
| explained_variance | -1.77e+21 |
| learning_rate | 0.0002 |
| n_updates | 9099 |
| policy_loss | 1.62e+08 |
| std | 0.964 |
| value_loss | 1.83e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 213 |
| iterations | 9200 |
| time_elapsed | 215 |
| total_timesteps | 46000 |
| train/ | |
| entropy_loss | -41.4 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 9199 |
| policy_loss | 2.01e+08 |
| std | 0.964 |
| value_loss | 2.87e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 213 |
| iterations | 9300 |
| time_elapsed | 217 |
| total_timesteps | 46500 |
| train/ | |
| entropy_loss | -41.4 |
| explained_variance | -2.13e+23 |
| learning_rate | 0.0002 |
| n_updates | 9299 |
| policy_loss | 3.31e+08 |
| std | 0.963 |
| value_loss | 7e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 213 |
| iterations | 9400 |
| time_elapsed | 220 |
| total_timesteps | 47000 |
| train/ | |
| entropy_loss | -41.4 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 9399 |
| policy_loss | 4.06e+08 |
| std | 0.963 |
| value_loss | 1.1e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 213 |
| iterations | 9500 |
| time_elapsed | 222 |
| total_timesteps | 47500 |
| train/ | |
| entropy_loss | -41.4 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 9499 |
| policy_loss | 5.33e+08 |
| std | 0.962 |
| value_loss | 2.11e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4841177.689704771
Sharpe: 1.0662304642107994
=================================
------------------------------------
| time/ | |
| fps | 213 |
| iterations | 9600 |
| time_elapsed | 224 |
| total_timesteps | 48000 |
| train/ | |
| entropy_loss | -41.4 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 9599 |
| policy_loss | 1.42e+08 |
| std | 0.962 |
| value_loss | 1.54e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 213 |
| iterations | 9700 |
| time_elapsed | 226 |
| total_timesteps | 48500 |
| train/ | |
| entropy_loss | -41.4 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 9699 |
| policy_loss | 1.72e+08 |
| std | 0.961 |
| value_loss | 2.54e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 213 |
| iterations | 9800 |
| time_elapsed | 229 |
| total_timesteps | 49000 |
| train/ | |
| entropy_loss | -41.4 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 9799 |
| policy_loss | 3.05e+08 |
| std | 0.961 |
| value_loss | 6.27e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 213 |
| iterations | 9900 |
| time_elapsed | 232 |
| total_timesteps | 49500 |
| train/ | |
| entropy_loss | -41.4 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 9899 |
| policy_loss | 3.52e+08 |
| std | 0.962 |
| value_loss | 9.87e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 212 |
| iterations | 10000 |
| time_elapsed | 234 |
| total_timesteps | 50000 |
| train/ | |
| entropy_loss | -41.4 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 9999 |
| policy_loss | 4.99e+08 |
| std | 0.962 |
| value_loss | 1.98e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4829593.807900699
Sharpe: 1.0662441117803074
=================================
------------------------------------
| time/ | |
| fps | 212 |
| iterations | 10100 |
| time_elapsed | 237 |
| total_timesteps | 50500 |
| train/ | |
| entropy_loss | -41.4 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 10099 |
| policy_loss | 1.41e+08 |
| std | 0.962 |
| value_loss | 1.59e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 212 |
| iterations | 10200 |
| time_elapsed | 239 |
| total_timesteps | 51000 |
| train/ | |
| entropy_loss | -41.4 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 10199 |
| policy_loss | 1.88e+08 |
| std | 0.961 |
| value_loss | 2.59e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 212 |
| iterations | 10300 |
| time_elapsed | 242 |
| total_timesteps | 51500 |
| train/ | |
| entropy_loss | -41.4 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 10299 |
| policy_loss | 3.11e+08 |
| std | 0.961 |
| value_loss | 5.9e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 212 |
| iterations | 10400 |
| time_elapsed | 244 |
| total_timesteps | 52000 |
| train/ | |
| entropy_loss | -41.4 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 10399 |
| policy_loss | 3.57e+08 |
| std | 0.961 |
| value_loss | 9.64e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 212 |
| iterations | 10500 |
| time_elapsed | 246 |
| total_timesteps | 52500 |
| train/ | |
| entropy_loss | -41.4 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 10499 |
| policy_loss | 4.69e+08 |
| std | 0.961 |
| value_loss | 1.89e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4867642.492651795
Sharpe: 1.0695800575241914
=================================
------------------------------------
| time/ | |
| fps | 212 |
| iterations | 10600 |
| time_elapsed | 249 |
| total_timesteps | 53000 |
| train/ | |
| entropy_loss | -41.3 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 10599 |
| policy_loss | 1.44e+08 |
| std | 0.96 |
| value_loss | 1.48e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 212 |
| iterations | 10700 |
| time_elapsed | 251 |
| total_timesteps | 53500 |
| train/ | |
| entropy_loss | -41.3 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 10699 |
| policy_loss | 1.9e+08 |
| std | 0.96 |
| value_loss | 2.62e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 212 |
| iterations | 10800 |
| time_elapsed | 253 |
| total_timesteps | 54000 |
| train/ | |
| entropy_loss | -41.3 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 10799 |
| policy_loss | 3.1e+08 |
| std | 0.959 |
| value_loss | 6.5e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 212 |
| iterations | 10900 |
| time_elapsed | 256 |
| total_timesteps | 54500 |
| train/ | |
| entropy_loss | -41.3 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 10899 |
| policy_loss | 3.56e+08 |
| std | 0.959 |
| value_loss | 1.09e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 212 |
| iterations | 11000 |
| time_elapsed | 258 |
| total_timesteps | 55000 |
| train/ | |
| entropy_loss | -41.3 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 10999 |
| policy_loss | 4.86e+08 |
| std | 0.958 |
| value_loss | 1.8e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4722117.849533835
Sharpe: 1.0511916286251552
=================================
------------------------------------
| time/ | |
| fps | 212 |
| iterations | 11100 |
| time_elapsed | 261 |
| total_timesteps | 55500 |
| train/ | |
| entropy_loss | -41.2 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 11099 |
| policy_loss | 1.37e+08 |
| std | 0.957 |
| value_loss | 1.42e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 212 |
| iterations | 11200 |
| time_elapsed | 263 |
| total_timesteps | 56000 |
| train/ | |
| entropy_loss | -41.2 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 11199 |
| policy_loss | 2.17e+08 |
| std | 0.956 |
| value_loss | 3.5e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 212 |
| iterations | 11300 |
| time_elapsed | 265 |
| total_timesteps | 56500 |
| train/ | |
| entropy_loss | -41.2 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 11299 |
| policy_loss | 3.17e+08 |
| std | 0.957 |
| value_loss | 7.01e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 212 |
| iterations | 11400 |
| time_elapsed | 268 |
| total_timesteps | 57000 |
| train/ | |
| entropy_loss | -41.2 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 11399 |
| policy_loss | 3.67e+08 |
| std | 0.956 |
| value_loss | 1.15e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 211 |
| iterations | 11500 |
| time_elapsed | 271 |
| total_timesteps | 57500 |
| train/ | |
| entropy_loss | -41.2 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 11499 |
| policy_loss | 5.1e+08 |
| std | 0.956 |
| value_loss | 1.78e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4803878.457147342
Sharpe: 1.0585455233591723
=================================
------------------------------------
| time/ | |
| fps | 211 |
| iterations | 11600 |
| time_elapsed | 274 |
| total_timesteps | 58000 |
| train/ | |
| entropy_loss | -41.2 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 11599 |
| policy_loss | 1.22e+08 |
| std | 0.956 |
| value_loss | 1.16e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 211 |
| iterations | 11700 |
| time_elapsed | 276 |
| total_timesteps | 58500 |
| train/ | |
| entropy_loss | -41.2 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 11699 |
| policy_loss | 2.17e+08 |
| std | 0.956 |
| value_loss | 3.15e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 211 |
| iterations | 11800 |
| time_elapsed | 279 |
| total_timesteps | 59000 |
| train/ | |
| entropy_loss | -41.2 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 11799 |
| policy_loss | 3.13e+08 |
| std | 0.956 |
| value_loss | 6.62e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 211 |
| iterations | 11900 |
| time_elapsed | 281 |
| total_timesteps | 59500 |
| train/ | |
| entropy_loss | -41.2 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 11899 |
| policy_loss | 4.11e+08 |
| std | 0.956 |
| value_loss | 1.2e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 211 |
| iterations | 12000 |
| time_elapsed | 283 |
| total_timesteps | 60000 |
| train/ | |
| entropy_loss | -41.2 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 11999 |
| policy_loss | 5.16e+08 |
| std | 0.956 |
| value_loss | 1.93e+14 |
------------------------------------
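###Markdown
The trained agent can be persisted and reloaded through the standard Stable-Baselines3 save/load API (a minimal sketch; the file name below is illustrative):
###Code
# Hypothetical example -- persisting and restoring the trained A2C agent.
from stable_baselines3 import A2C

trained_a2c.save("a2c_portfolio")        # writes a2c_portfolio.zip
loaded_a2c = A2C.load("a2c_portfolio")   # restore for later evaluation
###Output
_____no_output_____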
###Markdown
Model 2: **PPO**
###Code
agent = DRLAgent(env=env_train)
PPO_PARAMS = {
    "n_steps": 2048,
    "ent_coef": 0.005,
    "learning_rate": 0.0001,
    "batch_size": 128,
}
model_ppo = agent.get_model("ppo", model_kwargs=PPO_PARAMS)
trained_ppo = agent.train_model(model=model_ppo,
                                tb_log_name='ppo',
                                total_timesteps=80000)
###Output
Logging to tensorboard_log/ppo/ppo_3
-----------------------------
| time/ | |
| fps | 458 |
| iterations | 1 |
| time_elapsed | 4 |
| total_timesteps | 2048 |
-----------------------------
=================================
begin_total_asset:1000000
end_total_asset:4917364.6278486075
Sharpe: 1.074414829116363
=================================
--------------------------------------------
| time/ | |
| fps | 391 |
| iterations | 2 |
| time_elapsed | 10 |
| total_timesteps | 4096 |
| train/ | |
| approx_kl | -7.8231096e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -3.71e+14 |
| learning_rate | 0.0001 |
| loss | 7.78e+14 |
| n_updates | 10 |
| policy_gradient_loss | -6.16e-07 |
| std | 1 |
| value_loss | 1.57e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4996331.100586685
Sharpe: 1.0890927964884638
=================================
--------------------------------------------
| time/ | |
| fps | 373 |
| iterations | 3 |
| time_elapsed | 16 |
| total_timesteps | 6144 |
| train/ | |
| approx_kl | -3.5390258e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -8.76e+14 |
| learning_rate | 0.0001 |
| loss | 1.1e+15 |
| n_updates | 20 |
| policy_gradient_loss | -4.29e-07 |
| std | 1 |
| value_loss | 2.33e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4751039.2878817525
Sharpe: 1.0560179406423764
=================================
--------------------------------------------
| time/ | |
| fps | 365 |
| iterations | 4 |
| time_elapsed | 22 |
| total_timesteps | 8192 |
| train/ | |
| approx_kl | -1.6763806e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -8.01e+15 |
| learning_rate | 0.0001 |
| loss | 1.25e+15 |
| n_updates | 30 |
| policy_gradient_loss | -5.58e-07 |
| std | 1 |
| value_loss | 2.59e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4769059.347696523
Sharpe: 1.056814654380227
=================================
--------------------------------------------
| time/ | |
| fps | 360 |
| iterations | 5 |
| time_elapsed | 28 |
| total_timesteps | 10240 |
| train/ | |
| approx_kl | -5.5879354e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -2.55e+16 |
| learning_rate | 0.0001 |
| loss | 1.24e+15 |
| n_updates | 40 |
| policy_gradient_loss | -4.9e-07 |
| std | 1 |
| value_loss | 2.7e+15 |
--------------------------------------------
--------------------------------------------
| time/ | |
| fps | 358 |
| iterations | 6 |
| time_elapsed | 34 |
| total_timesteps | 12288 |
| train/ | |
| approx_kl | 1.13621354e-07 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -9.17e+16 |
| learning_rate | 0.0001 |
| loss | 1.35e+15 |
| n_updates | 50 |
| policy_gradient_loss | -4.28e-07 |
| std | 1 |
| value_loss | 2.77e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4816491.86007194
Sharpe: 1.0636199939613733
=================================
-------------------------------------------
| time/ | |
| fps | 356 |
| iterations | 7 |
| time_elapsed | 40 |
| total_timesteps | 14336 |
| train/ | |
| approx_kl | 3.5390258e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.42e+17 |
| learning_rate | 0.0001 |
| loss | 1.03e+15 |
| n_updates | 60 |
| policy_gradient_loss | -6.52e-07 |
| std | 1 |
| value_loss | 1.94e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4631919.83090099
Sharpe: 1.0396504731290799
=================================
-------------------------------------------
| time/ | |
| fps | 354 |
| iterations | 8 |
| time_elapsed | 46 |
| total_timesteps | 16384 |
| train/ | |
| approx_kl | 1.7508864e-07 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -6.93e+17 |
| learning_rate | 0.0001 |
| loss | 9.83e+14 |
| n_updates | 70 |
| policy_gradient_loss | -5.78e-07 |
| std | 1 |
| value_loss | 2.06e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4728763.286321457
Sharpe: 1.052390302374202
=================================
-------------------------------------------
| time/ | |
| fps | 353 |
| iterations | 9 |
| time_elapsed | 52 |
| total_timesteps | 18432 |
| train/ | |
| approx_kl | 4.4703484e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.72e+18 |
| learning_rate | 0.0001 |
| loss | 1.25e+15 |
| n_updates | 80 |
| policy_gradient_loss | -4.84e-07 |
| std | 1 |
| value_loss | 2.33e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4439983.024798136
Sharpe: 1.013829383303325
=================================
--------------------------------------------
| time/ | |
| fps | 352 |
| iterations | 10 |
| time_elapsed | 58 |
| total_timesteps | 20480 |
| train/ | |
| approx_kl | -1.3038516e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.7e+18 |
| learning_rate | 0.0001 |
| loss | 1.17e+15 |
| n_updates | 90 |
| policy_gradient_loss | -4.82e-07 |
| std | 1 |
| value_loss | 2.58e+15 |
--------------------------------------------
-------------------------------------------
| time/ | |
| fps | 352 |
| iterations | 11 |
| time_elapsed | 63 |
| total_timesteps | 22528 |
| train/ | |
| approx_kl | -9.313226e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.85e+18 |
| learning_rate | 0.0001 |
| loss | 1.2e+15 |
| n_updates | 100 |
| policy_gradient_loss | -5.2e-07 |
| std | 1 |
| value_loss | 2.51e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:5048884.524536961
Sharpe: 1.0963911876706685
=================================
-------------------------------------------
| time/ | |
| fps | 351 |
| iterations | 12 |
| time_elapsed | 69 |
| total_timesteps | 24576 |
| train/ | |
| approx_kl | 3.7252903e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -2.67e+18 |
| learning_rate | 0.0001 |
| loss | 1.44e+15 |
| n_updates | 110 |
| policy_gradient_loss | -4.53e-07 |
| std | 1 |
| value_loss | 2.8e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4824229.456193555
Sharpe: 1.0648549464252506
=================================
-------------------------------------------
| time/ | |
| fps | 351 |
| iterations | 13 |
| time_elapsed | 75 |
| total_timesteps | 26624 |
| train/ | |
| approx_kl | 3.3527613e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -3.38e+18 |
| learning_rate | 0.0001 |
| loss | 7.89e+14 |
| n_updates | 120 |
| policy_gradient_loss | -6.06e-07 |
| std | 1 |
| value_loss | 1.76e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4602974.615591427
Sharpe: 1.034753433280377
=================================
-------------------------------------------
| time/ | |
| fps | 350 |
| iterations | 14 |
| time_elapsed | 81 |
| total_timesteps | 28672 |
| train/ | |
| approx_kl | 8.8475645e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.75e+19 |
| learning_rate | 0.0001 |
| loss | 1.23e+15 |
| n_updates | 130 |
| policy_gradient_loss | -5.8e-07 |
| std | 1 |
| value_loss | 2.27e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4608422.583401322
Sharpe: 1.035300880612428
=================================
-------------------------------------------
| time/ | |
| fps | 349 |
| iterations | 15 |
| time_elapsed | 87 |
| total_timesteps | 30720 |
| train/ | |
| approx_kl | 1.3038516e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -7.71e+18 |
| learning_rate | 0.0001 |
| loss | 1.22e+15 |
| n_updates | 140 |
| policy_gradient_loss | -5.63e-07 |
| std | 1 |
| value_loss | 2.39e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4826869.636472441
Sharpe: 1.0676330284861433
=================================
--------------------------------------------
| time/ | |
| fps | 348 |
| iterations | 16 |
| time_elapsed | 94 |
| total_timesteps | 32768 |
| train/ | |
| approx_kl | -1.4901161e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.51e+19 |
| learning_rate | 0.0001 |
| loss | 1.22e+15 |
| n_updates | 150 |
| policy_gradient_loss | -5.78e-07 |
| std | 1 |
| value_loss | 2.7e+15 |
--------------------------------------------
-------------------------------------------
| time/ | |
| fps | 346 |
| iterations | 17 |
| time_elapsed | 100 |
| total_timesteps | 34816 |
| train/ | |
| approx_kl | -5.401671e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.48e+19 |
| learning_rate | 0.0001 |
| loss | 1.48e+15 |
| n_updates | 160 |
| policy_gradient_loss | -3.96e-07 |
| std | 1 |
| value_loss | 2.81e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4364006.929301854
Sharpe: 1.002176631256902
=================================
--------------------------------------------
| time/ | |
| fps | 345 |
| iterations | 18 |
| time_elapsed | 106 |
| total_timesteps | 36864 |
| train/ | |
| approx_kl | -1.0803342e-07 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.15e+19 |
| learning_rate | 0.0001 |
| loss | 8.41e+14 |
| n_updates | 170 |
| policy_gradient_loss | -4.91e-07 |
| std | 1 |
| value_loss | 1.58e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4796634.5596691
Sharpe: 1.0678319491053092
=================================
--------------------------------------------
| time/ | |
| fps | 344 |
| iterations | 19 |
| time_elapsed | 112 |
| total_timesteps | 38912 |
| train/ | |
| approx_kl | -1.3038516e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -4.21e+19 |
| learning_rate | 0.0001 |
| loss | 1.03e+15 |
| n_updates | 180 |
| policy_gradient_loss | -5.6e-07 |
| std | 1 |
| value_loss | 2.02e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4969786.413399254
Sharpe: 1.0823021486710163
=================================
--------------------------------------------
| time/ | |
| fps | 344 |
| iterations | 20 |
| time_elapsed | 118 |
| total_timesteps | 40960 |
| train/ | |
| approx_kl | -6.7055225e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -6.41e+19 |
| learning_rate | 0.0001 |
| loss | 1.22e+15 |
| n_updates | 190 |
| policy_gradient_loss | -2.87e-07 |
| std | 1 |
| value_loss | 2.4e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4885480.801922398
Sharpe: 1.0729451877791811
=================================
--------------------------------------------
| time/ | |
| fps | 343 |
| iterations | 21 |
| time_elapsed | 125 |
| total_timesteps | 43008 |
| train/ | |
| approx_kl | -5.5879354e-09 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -6.85e+19 |
| learning_rate | 0.0001 |
| loss | 1.62e+15 |
| n_updates | 200 |
| policy_gradient_loss | -5.24e-07 |
| std | 1 |
| value_loss | 2.95e+15 |
--------------------------------------------
-------------------------------------------
| time/ | |
| fps | 343 |
| iterations | 22 |
| time_elapsed | 131 |
| total_timesteps | 45056 |
| train/ | |
| approx_kl | 1.8067658e-07 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -7.01e+19 |
| learning_rate | 0.0001 |
| loss | 1.34e+15 |
| n_updates | 210 |
| policy_gradient_loss | -4.62e-07 |
| std | 1 |
| value_loss | 2.93e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:5613709.009268909
Sharpe: 1.1673870008513114
=================================
--------------------------------------------
| time/ | |
| fps | 342 |
| iterations | 23 |
| time_elapsed | 137 |
| total_timesteps | 47104 |
| train/ | |
| approx_kl | -2.0489097e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -6.72e+19 |
| learning_rate | 0.0001 |
| loss | 1.41e+15 |
| n_updates | 220 |
| policy_gradient_loss | -4.78e-07 |
| std | 1 |
| value_loss | 2.71e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:5043800.590470289
Sharpe: 1.0953673306850924
=================================
-------------------------------------------
| time/ | |
| fps | 342 |
| iterations | 24 |
| time_elapsed | 143 |
| total_timesteps | 49152 |
| train/ | |
| approx_kl | 2.4214387e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.37e+20 |
| learning_rate | 0.0001 |
| loss | 1.01e+15 |
| n_updates | 230 |
| policy_gradient_loss | -5.28e-07 |
| std | 1 |
| value_loss | 2.26e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4776576.852863929
Sharpe: 1.0593811754233755
=================================
-------------------------------------------
| time/ | |
| fps | 342 |
| iterations | 25 |
| time_elapsed | 149 |
| total_timesteps | 51200 |
| train/ | |
| approx_kl | 4.4703484e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -3.27e+20 |
| learning_rate | 0.0001 |
| loss | 1.21e+15 |
| n_updates | 240 |
| policy_gradient_loss | -4.82e-07 |
| std | 1 |
| value_loss | 2.46e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4468393.200157898
Sharpe: 1.0192746589767419
=================================
-------------------------------------------
| time/ | |
| fps | 341 |
| iterations | 26 |
| time_elapsed | 156 |
| total_timesteps | 53248 |
| train/ | |
| approx_kl | 2.6077032e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.96e+20 |
| learning_rate | 0.0001 |
| loss | 1.31e+15 |
| n_updates | 250 |
| policy_gradient_loss | -5.36e-07 |
| std | 1 |
| value_loss | 2.59e+15 |
-------------------------------------------
--------------------------------------------
| time/ | |
| fps | 341 |
| iterations | 27 |
| time_elapsed | 162 |
| total_timesteps | 55296 |
| train/ | |
| approx_kl | -1.3038516e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.68e+20 |
| learning_rate | 0.0001 |
| loss | 1.33e+15 |
| n_updates | 260 |
| policy_gradient_loss | -3.77e-07 |
| std | 1 |
| value_loss | 2.51e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4875234.39450474
Sharpe: 1.0721137742534572
=================================
--------------------------------------------
| time/ | |
| fps | 340 |
| iterations | 28 |
| time_elapsed | 168 |
| total_timesteps | 57344 |
| train/ | |
| approx_kl | -1.2479722e-07 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.66e+20 |
| learning_rate | 0.0001 |
| loss | 1.59e+15 |
| n_updates | 270 |
| policy_gradient_loss | -4.61e-07 |
| std | 1 |
| value_loss | 2.8e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4600459.210918712
Sharpe: 1.034756153745345
=================================
-------------------------------------------
| time/ | |
| fps | 340 |
| iterations | 29 |
| time_elapsed | 174 |
| total_timesteps | 59392 |
| train/ | |
| approx_kl | -4.284084e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.26e+20 |
| learning_rate | 0.0001 |
| loss | 8.07e+14 |
| n_updates | 280 |
| policy_gradient_loss | -5.44e-07 |
| std | 1 |
| value_loss | 1.62e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4526188.381438201
Sharpe: 1.0293846869900876
=================================
--------------------------------------------
| time/ | |
| fps | 339 |
| iterations | 30 |
| time_elapsed | 180 |
| total_timesteps | 61440 |
| train/ | |
| approx_kl | -2.4214387e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -6.44e+20 |
| learning_rate | 0.0001 |
| loss | 1.12e+15 |
| n_updates | 290 |
| policy_gradient_loss | -5.65e-07 |
| std | 1 |
| value_loss | 2.1e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4487836.803716703
Sharpe: 1.010974660894394
=================================
--------------------------------------------
| time/ | |
| fps | 339 |
| iterations | 31 |
| time_elapsed | 187 |
| total_timesteps | 63488 |
| train/ | |
| approx_kl | -2.6077032e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -4.47e+20 |
| learning_rate | 0.0001 |
| loss | 1.14e+15 |
| n_updates | 300 |
| policy_gradient_loss | -4.8e-07 |
| std | 1 |
| value_loss | 2.25e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4480729.650671386
Sharpe: 1.0219085518652522
=================================
--------------------------------------------
| time/ | |
| fps | 339 |
| iterations | 32 |
| time_elapsed | 193 |
| total_timesteps | 65536 |
| train/ | |
| approx_kl | -2.0302832e-07 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -3.87e+20 |
| learning_rate | 0.0001 |
| loss | 1.28e+15 |
| n_updates | 310 |
| policy_gradient_loss | -4.4e-07 |
| std | 1 |
| value_loss | 2.51e+15 |
--------------------------------------------
------------------------------------------
| time/ | |
| fps | 339 |
| iterations | 33 |
| time_elapsed | 199 |
| total_timesteps | 67584 |
| train/ | |
| approx_kl | 1.359731e-07 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -3.68e+20 |
| learning_rate | 0.0001 |
| loss | 1.24e+15 |
| n_updates | 320 |
| policy_gradient_loss | -4.51e-07 |
| std | 1 |
| value_loss | 2.66e+15 |
------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4399373.734699048
Sharpe: 1.005407087483561
=================================
-------------------------------------------
| time/ | |
| fps | 338 |
| iterations | 34 |
| time_elapsed | 205 |
| total_timesteps | 69632 |
| train/ | |
| approx_kl | 2.2351742e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -2.29e+20 |
| learning_rate | 0.0001 |
| loss | 8.5e+14 |
| n_updates | 330 |
| policy_gradient_loss | -5.56e-07 |
| std | 1 |
| value_loss | 1.64e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4305742.921261859
Sharpe: 0.9945061913961891
=================================
-------------------------------------------
| time/ | |
| fps | 338 |
| iterations | 35 |
| time_elapsed | 211 |
| total_timesteps | 71680 |
| train/ | |
| approx_kl | 1.3411045e-07 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -7.11e+20 |
| learning_rate | 0.0001 |
| loss | 7.97e+14 |
| n_updates | 340 |
| policy_gradient_loss | -6.48e-07 |
| std | 1 |
| value_loss | 1.8e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4794175.629957249
Sharpe: 1.0611635246548963
=================================
--------------------------------------------
| time/ | |
| fps | 338 |
| iterations | 36 |
| time_elapsed | 217 |
| total_timesteps | 73728 |
| train/ | |
| approx_kl | -3.3527613e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.16e+21 |
| learning_rate | 0.0001 |
| loss | 1.07e+15 |
| n_updates | 350 |
| policy_gradient_loss | -4.82e-07 |
| std | 1 |
| value_loss | 2.06e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4467487.416264421
Sharpe: 1.021012208464475
=================================
------------------------------------------
| time/ | |
| fps | 338 |
| iterations | 37 |
| time_elapsed | 224 |
| total_timesteps | 75776 |
| train/ | |
| approx_kl | 5.401671e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -9.89e+20 |
| learning_rate | 0.0001 |
| loss | 1.46e+15 |
| n_updates | 360 |
| policy_gradient_loss | -4.78e-07 |
| std | 1 |
| value_loss | 2.75e+15 |
------------------------------------------
-------------------------------------------
| time/ | |
| fps | 338 |
| iterations | 38 |
| time_elapsed | 229 |
| total_timesteps | 77824 |
| train/ | |
| approx_kl | 1.6763806e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -7.64e+20 |
| learning_rate | 0.0001 |
| loss | 1.25e+15 |
| n_updates | 370 |
| policy_gradient_loss | -4.54e-07 |
| std | 1 |
| value_loss | 2.57e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4806649.219027834
Sharpe: 1.0604486398186765
=================================
------------------------------------------
| time/ | |
| fps | 338 |
| iterations | 39 |
| time_elapsed | 236 |
| total_timesteps | 79872 |
| train/ | |
| approx_kl | 4.284084e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -6.96e+20 |
| learning_rate | 0.0001 |
| loss | 1.28e+15 |
| n_updates | 380 |
| policy_gradient_loss | -5.9e-07 |
| std | 1 |
| value_loss | 2.44e+15 |
------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4653147.508966551
Sharpe: 1.043189911078732
=================================
-------------------------------------------
| time/ | |
| fps | 338 |
| iterations | 40 |
| time_elapsed | 242 |
| total_timesteps | 81920 |
| train/ | |
| approx_kl | 6.3329935e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.04e+21 |
| learning_rate | 0.0001 |
| loss | 1.01e+15 |
| n_updates | 390 |
| policy_gradient_loss | -5.33e-07 |
| std | 1 |
| value_loss | 1.82e+15 |
-------------------------------------------
###Markdown
Model 3: **DDPG**
###Code
agent = DRLAgent(env = env_train)
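# stable-baselines3 DDPG hyperparameters (the comment is descriptive, added here for clarity):
# 128-sample minibatches, a 50,000-transition replay buffer, and a 1e-3 learning
# rate shared by the actor and critic optimizers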
DDPG_PARAMS = {"batch_size": 128, "buffer_size": 50000, "learning_rate": 0.001}
model_ddpg = agent.get_model("ddpg",model_kwargs = DDPG_PARAMS)
trained_ddpg = agent.train_model(model=model_ddpg,
tb_log_name='ddpg',
total_timesteps=50000)
###Output
Logging to tensorboard_log/ddpg/ddpg_2
=================================
begin_total_asset:1000000
end_total_asset:4625995.900359718
Sharpe: 1.040202670783119
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
----------------------------------
| time/ | |
| episodes | 4 |
| fps | 22 |
| time_elapsed | 439 |
| total timesteps | 10064 |
| train/ | |
| actor_loss | -6.99e+07 |
| critic_loss | 7.27e+12 |
| learning_rate | 0.001 |
| n_updates | 7548 |
----------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
----------------------------------
| time/ | |
| episodes | 8 |
| fps | 20 |
| time_elapsed | 980 |
| total timesteps | 20128 |
| train/ | |
| actor_loss | -1.44e+08 |
| critic_loss | 1.81e+13 |
| learning_rate | 0.001 |
| n_updates | 17612 |
----------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
----------------------------------
| time/ | |
| episodes | 12 |
| fps | 19 |
| time_elapsed | 1542 |
| total timesteps | 30192 |
| train/ | |
| actor_loss | -1.88e+08 |
| critic_loss | 2.72e+13 |
| learning_rate | 0.001 |
| n_updates | 27676 |
----------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
----------------------------------
| time/ | |
| episodes | 16 |
| fps | 18 |
| time_elapsed | 2133 |
| total timesteps | 40256 |
| train/ | |
| actor_loss | -2.15e+08 |
| critic_loss | 3.45e+13 |
| learning_rate | 0.001 |
| n_updates | 37740 |
----------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
---------------------------------
| time/ | |
| episodes | 20 |
| fps | 17 |
| time_elapsed | 2874 |
| total timesteps | 50320 |
| train/ | |
| actor_loss | -2.3e+08 |
| critic_loss | 4.05e+13 |
| learning_rate | 0.001 |
| n_updates | 47804 |
---------------------------------
###Markdown
Model 4: **SAC**
###Code
agent = DRLAgent(env = env_train)
SAC_PARAMS = {
"batch_size": 128,
"buffer_size": 100000,
"learning_rate": 0.0003,
"learning_starts": 100,
"ent_coef": "auto_0.1",
}
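# ent_coef="auto_0.1" asks stable-baselines3 to tune the entropy coefficient
# automatically, starting from 0.1; learning_starts=100 collects 100 transitions
# before any gradient update is performed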
model_sac = agent.get_model("sac",model_kwargs = SAC_PARAMS)
trained_sac = agent.train_model(model=model_sac,
tb_log_name='sac',
total_timesteps=50000)
###Output
Logging to tensorboard_log/sac/sac_1
=================================
begin_total_asset:1000000
end_total_asset:4449463.498168942
Sharpe: 1.01245667390232
=================================
=================================
begin_total_asset:1000000
end_total_asset:4418643.239765096
Sharpe: 1.0135796594260282
=================================
=================================
begin_total_asset:1000000
end_total_asset:4418644.1960784905
Sharpe: 1.0135797537524718
=================================
=================================
begin_total_asset:1000000
end_total_asset:4418659.429680678
Sharpe: 1.013581852537709
=================================
----------------------------------
| time/ | |
| episodes | 4 |
| fps | 12 |
| time_elapsed | 783 |
| total timesteps | 10064 |
| train/ | |
| actor_loss | -8.83e+07 |
| critic_loss | 6.57e+12 |
| ent_coef | 2.24 |
| ent_coef_loss | -205 |
| learning_rate | 0.0003 |
| n_updates | 9963 |
----------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4418651.576406099
Sharpe: 1.013581224026754
=================================
=================================
begin_total_asset:1000000
end_total_asset:4418670.948269031
Sharpe: 1.0135838030234754
=================================
=================================
begin_total_asset:1000000
end_total_asset:4418682.278829884
Sharpe: 1.013585596968056
=================================
=================================
begin_total_asset:1000000
end_total_asset:4418791.911955293
Sharpe: 1.0136007328171013
=================================
----------------------------------
| time/ | |
| episodes | 8 |
| fps | 12 |
| time_elapsed | 1585 |
| total timesteps | 20128 |
| train/ | |
| actor_loss | -1.51e+08 |
| critic_loss | 1.12e+13 |
| ent_coef | 41.7 |
| ent_coef_loss | -670 |
| learning_rate | 0.0003 |
| n_updates | 20027 |
----------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4418737.365107464
Sharpe: 1.0135970410224868
=================================
=================================
begin_total_asset:1000000
end_total_asset:4418754.895735274
Sharpe: 1.0135965589029627
=================================
=================================
begin_total_asset:1000000
end_total_asset:4419325.814567342
Sharpe: 1.0136807224228588
=================================
=================================
begin_total_asset:1000000
end_total_asset:4418142.473513333
Sharpe: 1.0135234795926031
=================================
----------------------------------
| time/ | |
| episodes | 12 |
| fps | 12 |
| time_elapsed | 2400 |
| total timesteps | 30192 |
| train/ | |
| actor_loss | -1.85e+08 |
| critic_loss | 1.87e+13 |
| ent_coef | 725 |
| ent_coef_loss | -673 |
| learning_rate | 0.0003 |
| n_updates | 30091 |
----------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4422046.188863339
Sharpe: 1.0140936726052256
=================================
=================================
begin_total_asset:1000000
end_total_asset:4424919.463828854
Sharpe: 1.014521127041106
=================================
=================================
begin_total_asset:1000000
end_total_asset:4427483.152494239
Sharpe: 1.0148626804754584
=================================
=================================
begin_total_asset:1000000
end_total_asset:4460697.650185859
Sharpe: 1.019852362102548
=================================
----------------------------------
| time/ | |
| episodes | 16 |
| fps | 12 |
| time_elapsed | 3210 |
| total timesteps | 40256 |
| train/ | |
| actor_loss | -1.93e+08 |
| critic_loss | 1.62e+13 |
| ent_coef | 1.01e+04 |
| ent_coef_loss | -238 |
| learning_rate | 0.0003 |
| n_updates | 40155 |
----------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4434035.982803257
Sharpe: 1.0161512551319891
=================================
=================================
begin_total_asset:1000000
end_total_asset:4454728.906041551
Sharpe: 1.018484863448905
=================================
=================================
begin_total_asset:1000000
end_total_asset:4475667.120269234
Sharpe: 1.0215545521682856
=================================
###Markdown
Trading
Assume that we have $1,000,000 of initial capital on 2019-01-01. We use the trained A2C model to trade the Dow Jones 30 stocks.
###Code
trade = data_split(df,'2019-01-01', '2021-01-01')
e_trade_gym = StockPortfolioEnv(df = trade, **env_kwargs)
trade.shape
df_daily_return, df_actions = DRLAgent.DRL_prediction(model=trained_a2c,
environment = e_trade_gym)
df_daily_return.head()
df_actions.head()
df_actions.to_csv('df_actions.csv')
###Output
_____no_output_____
###Markdown
Part 7: Backtest Our Strategy
Backtesting plays a key role in evaluating the performance of a trading strategy. An automated backtesting tool is preferred because it reduces human error. We usually use the Quantopian pyfolio package to backtest our trading strategies. It is easy to use and consists of various individual plots that together provide a comprehensive picture of the performance of a trading strategy.
7.1 BackTestStats
Pass in the daily return series recorded by the environment (df_daily_return above); this information is stored in the env class.
###Code
from pyfolio import timeseries
DRL_strat = convert_daily_return_to_pyfolio_ts(df_daily_return)
perf_func = timeseries.perf_stats
perf_stats_all = perf_func( returns=DRL_strat,
factor_returns=DRL_strat,
positions=None, transactions=None, turnover_denom="AGB")
print("==============DRL Strategy Stats===========")
perf_stats_all
###Output
==============DRL Strategy Stats===========
###Markdown
7.2 BackTestPlot
###Code
import pyfolio
%matplotlib inline
baseline_df = get_baseline(
ticker='^DJI', start='2019-01-01', end='2021-01-01'
)
baseline_returns = get_daily_return(baseline_df, value_col_name="close")
with pyfolio.plotting.plotting_context(font_scale=1.1):
pyfolio.create_full_tear_sheet(returns = DRL_strat,
benchmark_rets=baseline_returns, set_context=False)
###Output
[*********************100%***********************] 1 of 1 completed
Shape of DataFrame: (505, 8)
###Markdown
Deep Reinforcement Learning for Stock Trading from Scratch: Portfolio Allocation
Tutorials to use OpenAI DRL to perform portfolio allocation in one Jupyter Notebook | Presented at NeurIPS 2020: Deep RL Workshop
* This blog is based on our paper: FinRL: A Deep Reinforcement Learning Library for Automated Stock Trading in Quantitative Finance, presented at NeurIPS 2020: Deep RL Workshop.
* Check out medium blog for detailed explanations:
* Please report any issues to our Github: https://github.com/AI4Finance-LLC/FinRL-Library/issues
* **Pytorch Version**
Content
* [1. Problem Definition](0)
* [2. Getting Started - Load Python packages](1)
  * [2.1. Install Packages](1.1)
  * [2.2. Check Additional Packages](1.2)
  * [2.3. Import Packages](1.3)
  * [2.4. Create Folders](1.4)
* [3. Download Data](2)
* [4. Preprocess Data](3)
  * [4.1. Technical Indicators](3.1)
  * [4.2. Perform Feature Engineering](3.2)
* [5. Build Environment](4)
  * [5.1. Training & Trade Data Split](4.1)
  * [5.2. User-defined Environment](4.2)
  * [5.3. Initialize Environment](4.3)
* [6. Implement DRL Algorithms](5)
* [7. Backtesting Performance](6)
  * [7.1. BackTestStats](6.1)
  * [7.2. BackTestPlot](6.2)
  * [7.3. Baseline Stats](6.3)
  * [7.4. Compare to Stock Market Index](6.4)
Part 1. Problem Definition
This problem is to design an automated solution for portfolio allocation. We model the trading process as a Markov Decision Process (MDP) and then formulate our trading goal as a maximization problem.
The agent is trained with Deep Reinforcement Learning (DRL) algorithms, and the components of the reinforcement learning environment are:
* Action: The action space describes the allowed actions through which the agent interacts with the environment. Normally, a ∈ A includes three actions: a ∈ {−1, 0, 1}, where −1, 0, 1 represent selling, holding, and buying one stock. An action can also be carried out on multiple shares: we use an action space {−k, ..., −1, 0, 1, ..., k}, where k denotes the number of shares. For example, "Buy 10 shares of AAPL" and "Sell 10 shares of AAPL" are represented as 10 and −10, respectively.
* Reward function: r(s, a, s′) is the incentive mechanism for an agent to learn a better action. The reward is the change of the portfolio value when action a is taken at state s and the environment arrives at the new state s′, i.e., r(s, a, s′) = v′ − v, where v′ and v represent the portfolio values at states s′ and s, respectively (a minimal sketch follows at the end of this section).
* State: The state space describes the observations that the agent receives from the environment. Just as a human trader analyzes various information before executing a trade, our trading agent observes many different features in order to learn better in an interactive environment.
* Environment: Dow 30 constituents
The data that we will be using for this case study is obtained from the Yahoo Finance API. It contains Open-High-Low-Close prices and volume.
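To make the reward definition above concrete, here is a minimal sketch (the function and variable names are illustrative, not part of the FinRL API):

```python
# Sketch of r(s, a, s') = v' - v for this setting (illustrative only).
def reward(v_prev: float, v_new: float) -> float:
    """Change in portfolio value after one step."""
    return v_new - v_prev

# e.g. the portfolio grows from $1,000,000 to $1,001,500 over one step:
assert reward(1_000_000.0, 1_001_500.0) == 1_500.0
```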
Part 2. Getting Started - Load Python Packages
2.1. Install all the packages through FinRL library
###Code
## install finrl library
!pip install git+https://github.com/AI4Finance-LLC/FinRL-Library.git
###Output
Collecting git+https://github.com/AI4Finance-LLC/FinRL-Library.git
Cloning https://github.com/AI4Finance-LLC/FinRL-Library.git to /tmp/pip-req-build-bwdyljxc
Running command git clone -q https://github.com/AI4Finance-LLC/FinRL-Library.git /tmp/pip-req-build-bwdyljxc
Requirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from finrl==0.0.3) (1.19.5)
Requirement already satisfied: pandas>=1.1.5 in /usr/local/lib/python3.7/dist-packages (from finrl==0.0.3) (1.1.5)
Collecting stockstats
Downloading https://files.pythonhosted.org/packages/32/41/d3828c5bc0a262cb3112a4024108a3b019c183fa3b3078bff34bf25abf91/stockstats-0.3.2-py2.py3-none-any.whl
Collecting yfinance
Downloading https://files.pythonhosted.org/packages/7a/e8/b9d7104d3a4bf39924799067592d9e59119fcfc900a425a12e80a3123ec8/yfinance-0.1.55.tar.gz
Requirement already satisfied: matplotlib in /usr/local/lib/python3.7/dist-packages (from finrl==0.0.3) (3.2.2)
Requirement already satisfied: scikit-learn>=0.21.0 in /usr/local/lib/python3.7/dist-packages (from finrl==0.0.3) (0.22.2.post1)
Requirement already satisfied: gym>=0.17 in /usr/local/lib/python3.7/dist-packages (from finrl==0.0.3) (0.17.3)
Collecting stable-baselines3[extra]
  Downloading https://files.pythonhosted.org/packages/76/7c/ec89fd9a51c2ff640f150479069be817136c02f02349b5dd27a6e3bb8b3d/stable_baselines3-0.10.0-py3-none-any.whl (145kB)
     |████████████████████████████████| 153kB 6.0MB/s
Requirement already satisfied: pytest in /usr/local/lib/python3.7/dist-packages (from finrl==0.0.3) (3.6.4)
Requirement already satisfied: setuptools>=41.4.0 in /usr/local/lib/python3.7/dist-packages (from finrl==0.0.3) (53.0.0)
Requirement already satisfied: wheel>=0.33.6 in /usr/local/lib/python3.7/dist-packages (from finrl==0.0.3) (0.36.2)
Collecting pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2
Cloning https://github.com/quantopian/pyfolio.git to /tmp/pip-install-jk1inqx3/pyfolio
Running command git clone -q https://github.com/quantopian/pyfolio.git /tmp/pip-install-jk1inqx3/pyfolio
Requirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.7/dist-packages (from pandas>=1.1.5->finrl==0.0.3) (2.8.1)
Requirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.7/dist-packages (from pandas>=1.1.5->finrl==0.0.3) (2018.9)
Collecting int-date>=0.1.7
Downloading https://files.pythonhosted.org/packages/43/27/31803df15173ab341fe7548c14154b54227dfd8f630daa09a1c6e7db52f7/int_date-0.1.8-py2.py3-none-any.whl
Requirement already satisfied: requests>=2.20 in /usr/local/lib/python3.7/dist-packages (from yfinance->finrl==0.0.3) (2.23.0)
Requirement already satisfied: multitasking>=0.0.7 in /usr/local/lib/python3.7/dist-packages (from yfinance->finrl==0.0.3) (0.0.9)
Collecting lxml>=4.5.1
  Downloading https://files.pythonhosted.org/packages/d2/88/b25778f17e5320c1c58f8c5060fb5b037288e162bd7554c30799e9ea90db/lxml-4.6.2-cp37-cp37m-manylinux1_x86_64.whl (5.5MB)
     |████████████████████████████████| 5.5MB 8.8MB/s
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->finrl==0.0.3) (2.4.7)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.7/dist-packages (from matplotlib->finrl==0.0.3) (0.10.0)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->finrl==0.0.3) (1.3.1)
Requirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.7/dist-packages (from scikit-learn>=0.21.0->finrl==0.0.3) (1.0.1)
Requirement already satisfied: scipy>=0.17.0 in /usr/local/lib/python3.7/dist-packages (from scikit-learn>=0.21.0->finrl==0.0.3) (1.4.1)
Requirement already satisfied: pyglet<=1.5.0,>=1.4.0 in /usr/local/lib/python3.7/dist-packages (from gym>=0.17->finrl==0.0.3) (1.5.0)
Requirement already satisfied: cloudpickle<1.7.0,>=1.2.0 in /usr/local/lib/python3.7/dist-packages (from gym>=0.17->finrl==0.0.3) (1.3.0)
Requirement already satisfied: torch>=1.4.0 in /usr/local/lib/python3.7/dist-packages (from stable-baselines3[extra]->finrl==0.0.3) (1.7.0+cu101)
Requirement already satisfied: tensorboard; extra == "extra" in /usr/local/lib/python3.7/dist-packages (from stable-baselines3[extra]->finrl==0.0.3) (2.4.1)
Requirement already satisfied: psutil; extra == "extra" in /usr/local/lib/python3.7/dist-packages (from stable-baselines3[extra]->finrl==0.0.3) (5.4.8)
Requirement already satisfied: opencv-python; extra == "extra" in /usr/local/lib/python3.7/dist-packages (from stable-baselines3[extra]->finrl==0.0.3) (4.1.2.30)
Requirement already satisfied: pillow; extra == "extra" in /usr/local/lib/python3.7/dist-packages (from stable-baselines3[extra]->finrl==0.0.3) (7.0.0)
Requirement already satisfied: atari-py~=0.2.0; extra == "extra" in /usr/local/lib/python3.7/dist-packages (from stable-baselines3[extra]->finrl==0.0.3) (0.2.6)
Requirement already satisfied: six>=1.10.0 in /usr/local/lib/python3.7/dist-packages (from pytest->finrl==0.0.3) (1.15.0)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.7/dist-packages (from pytest->finrl==0.0.3) (20.3.0)
Requirement already satisfied: more-itertools>=4.0.0 in /usr/local/lib/python3.7/dist-packages (from pytest->finrl==0.0.3) (8.7.0)
Requirement already satisfied: pluggy<0.8,>=0.5 in /usr/local/lib/python3.7/dist-packages (from pytest->finrl==0.0.3) (0.7.1)
Requirement already satisfied: py>=1.5.0 in /usr/local/lib/python3.7/dist-packages (from pytest->finrl==0.0.3) (1.10.0)
Requirement already satisfied: atomicwrites>=1.0 in /usr/local/lib/python3.7/dist-packages (from pytest->finrl==0.0.3) (1.4.0)
Requirement already satisfied: ipython>=3.2.3 in /usr/local/lib/python3.7/dist-packages (from pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.3) (5.5.0)
Requirement already satisfied: seaborn>=0.7.1 in /usr/local/lib/python3.7/dist-packages (from pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.3) (0.11.1)
Collecting empyrical>=0.5.0
  Downloading https://files.pythonhosted.org/packages/74/43/1b997c21411c6ab7c96dc034e160198272c7a785aeea7654c9bcf98bec83/empyrical-0.5.5.tar.gz (52kB)
     |████████████████████████████████| 61kB 6.1MB/s
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests>=2.20->yfinance->finrl==0.0.3) (1.24.3)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests>=2.20->yfinance->finrl==0.0.3) (3.0.4)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests>=2.20->yfinance->finrl==0.0.3) (2.10)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests>=2.20->yfinance->finrl==0.0.3) (2020.12.5)
Requirement already satisfied: future in /usr/local/lib/python3.7/dist-packages (from pyglet<=1.5.0,>=1.4.0->gym>=0.17->finrl==0.0.3) (0.16.0)
Requirement already satisfied: dataclasses in /usr/local/lib/python3.7/dist-packages (from torch>=1.4.0->stable-baselines3[extra]->finrl==0.0.3) (0.6)
Requirement already satisfied: typing-extensions in /usr/local/lib/python3.7/dist-packages (from torch>=1.4.0->stable-baselines3[extra]->finrl==0.0.3) (3.7.4.3)
Requirement already satisfied: protobuf>=3.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard; extra == "extra"->stable-baselines3[extra]->finrl==0.0.3) (3.12.4)
Requirement already satisfied: grpcio>=1.24.3 in /usr/local/lib/python3.7/dist-packages (from tensorboard; extra == "extra"->stable-baselines3[extra]->finrl==0.0.3) (1.32.0)
Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in /usr/local/lib/python3.7/dist-packages (from tensorboard; extra == "extra"->stable-baselines3[extra]->finrl==0.0.3) (0.4.2)
Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.7/dist-packages (from tensorboard; extra == "extra"->stable-baselines3[extra]->finrl==0.0.3) (3.3.3)
Requirement already satisfied: absl-py>=0.4 in /usr/local/lib/python3.7/dist-packages (from tensorboard; extra == "extra"->stable-baselines3[extra]->finrl==0.0.3) (0.10.0)
Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard; extra == "extra"->stable-baselines3[extra]->finrl==0.0.3) (1.8.0)
Requirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.7/dist-packages (from tensorboard; extra == "extra"->stable-baselines3[extra]->finrl==0.0.3) (1.0.1)
Requirement already satisfied: google-auth<2,>=1.6.3 in /usr/local/lib/python3.7/dist-packages (from tensorboard; extra == "extra"->stable-baselines3[extra]->finrl==0.0.3) (1.27.0)
Requirement already satisfied: pickleshare in /usr/local/lib/python3.7/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.3) (0.7.5)
Requirement already satisfied: pexpect; sys_platform != "win32" in /usr/local/lib/python3.7/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.3) (4.8.0)
Requirement already satisfied: prompt-toolkit<2.0.0,>=1.0.4 in /usr/local/lib/python3.7/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.3) (1.0.18)
Requirement already satisfied: pygments in /usr/local/lib/python3.7/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.3) (2.6.1)
Requirement already satisfied: traitlets>=4.2 in /usr/local/lib/python3.7/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.3) (4.3.3)
Requirement already satisfied: simplegeneric>0.8 in /usr/local/lib/python3.7/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.3) (0.8.1)
Requirement already satisfied: decorator in /usr/local/lib/python3.7/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.3) (4.4.2)
Requirement already satisfied: pandas-datareader>=0.2 in /usr/local/lib/python3.7/dist-packages (from empyrical>=0.5.0->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.3) (0.9.0)
Requirement already satisfied: requests-oauthlib>=0.7.0 in /usr/local/lib/python3.7/dist-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard; extra == "extra"->stable-baselines3[extra]->finrl==0.0.3) (1.3.0)
Requirement already satisfied: importlib-metadata; python_version < "3.8" in /usr/local/lib/python3.7/dist-packages (from markdown>=2.6.8->tensorboard; extra == "extra"->stable-baselines3[extra]->finrl==0.0.3) (3.4.0)
Requirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.7/dist-packages (from google-auth<2,>=1.6.3->tensorboard; extra == "extra"->stable-baselines3[extra]->finrl==0.0.3) (0.2.8)
Requirement already satisfied: rsa<5,>=3.1.4; python_version >= "3.6" in /usr/local/lib/python3.7/dist-packages (from google-auth<2,>=1.6.3->tensorboard; extra == "extra"->stable-baselines3[extra]->finrl==0.0.3) (4.7.1)
Requirement already satisfied: cachetools<5.0,>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from google-auth<2,>=1.6.3->tensorboard; extra == "extra"->stable-baselines3[extra]->finrl==0.0.3) (4.2.1)
Requirement already satisfied: ptyprocess>=0.5 in /usr/local/lib/python3.7/dist-packages (from pexpect; sys_platform != "win32"->ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.3) (0.7.0)
Requirement already satisfied: wcwidth in /usr/local/lib/python3.7/dist-packages (from prompt-toolkit<2.0.0,>=1.0.4->ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.3) (0.2.5)
Requirement already satisfied: ipython-genutils in /usr/local/lib/python3.7/dist-packages (from traitlets>=4.2->ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.3) (0.2.0)
Requirement already satisfied: oauthlib>=3.0.0 in /usr/local/lib/python3.7/dist-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard; extra == "extra"->stable-baselines3[extra]->finrl==0.0.3) (3.1.0)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.7/dist-packages (from importlib-metadata; python_version < "3.8"->markdown>=2.6.8->tensorboard; extra == "extra"->stable-baselines3[extra]->finrl==0.0.3) (3.4.0)
Requirement already satisfied: pyasn1<0.5.0,>=0.4.6 in /usr/local/lib/python3.7/dist-packages (from pyasn1-modules>=0.2.1->google-auth<2,>=1.6.3->tensorboard; extra == "extra"->stable-baselines3[extra]->finrl==0.0.3) (0.4.8)
Building wheels for collected packages: finrl, yfinance, pyfolio, empyrical
Building wheel for finrl (setup.py) ... done
Created wheel for finrl: filename=finrl-0.0.3-cp37-none-any.whl size=38201 sha256=680913f069c396f38e0c508600b450102190f08e0b0bba53c58c334981ccbe6c
Stored in directory: /tmp/pip-ephem-wheel-cache-a1bbwmjm/wheels/9c/19/bf/c644def96612df1ad42c94d5304966797eaa3221dffc5efe0b
Building wheel for yfinance (setup.py) ... done
Created wheel for yfinance: filename=yfinance-0.1.55-py2.py3-none-any.whl size=22616 sha256=2a578f51d56d3d8fff23683c041d6815f487abf3c6c97d4739d122055a6599b3
Stored in directory: /root/.cache/pip/wheels/04/98/cc/2702a4242d60bdc14f48b4557c427ded1fe92aedf257d4565c
Building wheel for pyfolio (setup.py) ... done
Created wheel for pyfolio: filename=pyfolio-0.9.2+75.g4b901f6-cp37-none-any.whl size=75764 sha256=7e1ceb3360e57235c3d97bdbb36969c8ac05da709aa781413f1eca9088669323
Stored in directory: /tmp/pip-ephem-wheel-cache-a1bbwmjm/wheels/43/ce/d9/6752fb6e03205408773235435205a0519d2c608a94f1976e56
Building wheel for empyrical (setup.py) ... done
Created wheel for empyrical: filename=empyrical-0.5.5-cp37-none-any.whl size=39764 sha256=6b772c8c03b900a08799fdd831ee627277cc2c9241dc3103e2602fdd21781bb1
Stored in directory: /root/.cache/pip/wheels/ea/b2/c8/6769d8444d2f2e608fae2641833110668d0ffd1abeb2e9f3fc
Successfully built finrl yfinance pyfolio empyrical
Installing collected packages: int-date, stockstats, lxml, yfinance, stable-baselines3, empyrical, pyfolio, finrl
Found existing installation: lxml 4.2.6
Uninstalling lxml-4.2.6:
Successfully uninstalled lxml-4.2.6
Successfully installed empyrical-0.5.5 finrl-0.0.3 int-date-0.1.8 lxml-4.6.2 pyfolio-0.9.2+75.g4b901f6 stable-baselines3-0.10.0 stockstats-0.3.2 yfinance-0.1.55
###Markdown
2.2. Check if the additional packages needed are present; if not, install them.
* Yahoo Finance API
* pandas
* numpy
* matplotlib
* stockstats
* OpenAI gym
* stable-baselines
* tensorflow
* pyfolio
2.3. Import Packages
###Code
import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
matplotlib.use('Agg')
import datetime
from finrl.config import config
from finrl.marketdata.yahoodownloader import YahooDownloader
from finrl.preprocessing.preprocessors import FeatureEngineer
from finrl.preprocessing.data import data_split
from finrl.env.env_portfolio import StockPortfolioEnv
from finrl.model.models import DRLAgent
from finrl.trade.backtest import backtest_stats, backtest_plot, get_daily_return, get_baseline,convert_daily_return_to_pyfolio_ts
import sys
sys.path.append("../FinRL-Library")
###Output
/usr/local/lib/python3.7/dist-packages/pyfolio/pos.py:27: UserWarning: Module "zipline.assets" not found; multipliers will not be applied to position notionals.
'Module "zipline.assets" not found; multipliers will not be applied'
###Markdown
2.4. Create Folders
###Code
import os
if not os.path.exists("./" + config.DATA_SAVE_DIR):
os.makedirs("./" + config.DATA_SAVE_DIR)
if not os.path.exists("./" + config.TRAINED_MODEL_DIR):
os.makedirs("./" + config.TRAINED_MODEL_DIR)
if not os.path.exists("./" + config.TENSORBOARD_LOG_DIR):
os.makedirs("./" + config.TENSORBOARD_LOG_DIR)
if not os.path.exists("./" + config.RESULTS_DIR):
os.makedirs("./" + config.RESULTS_DIR)
###Output
_____no_output_____
###Markdown
Part 3. Download Data
Yahoo Finance is a website that provides stock data, financial news, financial reports, etc. All the data provided by Yahoo Finance is free.
* FinRL uses a class **YahooDownloader** to fetch data from the Yahoo Finance API.
* Call Limit: Using the Public API (without authentication), you are limited to 2,000 requests per hour per IP (or up to a total of 48,000 requests a day).
###Code
print(config.DOW_30_TICKER)
df = YahooDownloader(start_date = '2008-01-01',
end_date = '2021-01-01',
ticker_list = config.DOW_30_TICKER).fetch_data()
df.head()
df.shape
###Output
_____no_output_____
###Markdown
Part 4: Preprocess Data
Data preprocessing is a crucial step for training a high-quality machine learning model. We need to check for missing data and do feature engineering in order to convert the data into a model-ready state.
* Add technical indicators. In practical trading, various information needs to be taken into account, for example historical stock prices, currently held shares, technical indicators, etc. In this article, we demonstrate two trend-following technical indicators: MACD and RSI.
* Add turbulence index. Risk aversion reflects whether an investor will choose to preserve capital. It also influences one's trading strategy when facing different levels of market volatility. To control the risk in a worst-case scenario, such as the financial crisis of 2007–2008, FinRL employs the financial turbulence index, which measures extreme asset price fluctuations (see the sketch below).
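As a rough illustration of the turbulence idea, here is a minimal sketch of the standard Mahalanobis-distance formulation (the function and argument names are illustrative, and FinRL's own implementation may differ in details):

```python
import numpy as np

def turbulence_index(current_returns: np.ndarray, hist_returns: np.ndarray) -> float:
    """Mahalanobis distance of today's asset returns from their history.
    current_returns: shape (n_assets,); hist_returns: shape (n_days, n_assets)."""
    mu = hist_returns.mean(axis=0)                                 # historical mean returns
    cov_inv = np.linalg.pinv(np.cov(hist_returns, rowvar=False))   # (pseudo-)inverse covariance
    dev = current_returns - mu                                     # deviation from the mean
    return float(dev @ cov_inv @ dev)                              # large value = turbulent market
```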
###Code
fe = FeatureEngineer(
use_technical_indicator=True,
use_turbulence=False,
user_defined_feature = False)
df = fe.preprocess_data(df)
df.shape
df.head()
###Output
_____no_output_____
###Markdown
Add covariance matrix as states
###Code
# add covariance matrix as states
df=df.sort_values(['date','tic'],ignore_index=True)
df.index = df.date.factorize()[0]
cov_list = []
# look back is one year
lookback=252
for i in range(lookback,len(df.index.unique())):
data_lookback = df.loc[i-lookback:i,:]
price_lookback=data_lookback.pivot_table(index = 'date',columns = 'tic', values = 'close')
return_lookback = price_lookback.pct_change().dropna()
covs = return_lookback.cov().values
cov_list.append(covs)
df_cov = pd.DataFrame({'date':df.date.unique()[lookback:],'cov_list':cov_list})
df = df.merge(df_cov, on='date')
df = df.sort_values(['date','tic']).reset_index(drop=True)
df.shape
df.head()
###Output
_____no_output_____
###Markdown
Part 5. Design Environment
Considering the stochastic and interactive nature of automated stock trading tasks, a financial task is modeled as a **Markov Decision Process (MDP)** problem. The training process involves observing stock price changes, taking an action, and calculating the reward, so that the agent adjusts its strategy accordingly. By interacting with the environment, the trading agent derives a trading strategy that maximizes rewards as time proceeds.
Our trading environments, based on the OpenAI Gym framework, simulate live stock markets with real market data according to the principle of time-driven simulation.
The action space describes the allowed actions through which the agent interacts with the environment. Normally, action a takes one of three values: {-1, 0, 1}, where -1, 0, 1 represent selling, holding, and buying one share. An action can also be carried out on multiple shares: we use an action space {-k, …, -1, 0, 1, …, k}, where k denotes the number of shares to buy and -k denotes the number of shares to sell. For example, "Buy 10 shares of AAPL" and "Sell 10 shares of AAPL" are represented as 10 and -10, respectively. The continuous action space needs to be normalized to [-1, 1], since the policy is defined on a Gaussian distribution, which needs to be normalized and symmetric. In the portfolio allocation environment below, the raw actions are additionally mapped to non-negative portfolio weights that sum to 1 via a softmax (see softmax_normalization).
Training data split: 2009-01-01 to 2018-12-31
###Code
train = data_split(df, '2009-01-01','2019-01-01')
#trade = data_split(df, '2020-01-01', config.END_DATE)
train.head()
###Output
_____no_output_____
###Markdown
Environment for Portfolio Allocation
###Code
import numpy as np
import pandas as pd
from gym.utils import seeding
import gym
from gym import spaces
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
from stable_baselines3.common.vec_env import DummyVecEnv
class StockPortfolioEnv(gym.Env):
"""A single stock trading environment for OpenAI gym
Attributes
----------
df: DataFrame
input data
stock_dim : int
number of unique stocks
hmax : int
maximum number of shares to trade
initial_amount : int
start money
transaction_cost_pct: float
transaction cost percentage per trade
reward_scaling: float
scaling factor for reward, good for training
state_space: int
the dimension of input features
action_space: int
equals stock dimension
tech_indicator_list: list
a list of technical indicator names
turbulence_threshold: int
a threshold to control risk aversion
day: int
an increment number to control date
Methods
-------
_sell_stock()
perform sell action based on the sign of the action
_buy_stock()
perform buy action based on the sign of the action
step()
at each step the agent will return actions, then
we will calculate the reward, and return the next observation.
reset()
reset the environment
render()
use render to return other functions
save_asset_memory()
return account value at each time step
save_action_memory()
return actions/positions at each time step
"""
metadata = {'render.modes': ['human']}
def __init__(self,
df,
stock_dim,
hmax,
initial_amount,
transaction_cost_pct,
reward_scaling,
state_space,
action_space,
tech_indicator_list,
turbulence_threshold=None,
lookback=252,
day = 0):
#super(StockEnv, self).__init__()
#money = 10 , scope = 1
self.day = day
self.lookback=lookback
self.df = df
self.stock_dim = stock_dim
self.hmax = hmax
self.initial_amount = initial_amount
self.transaction_cost_pct =transaction_cost_pct
self.reward_scaling = reward_scaling
self.state_space = state_space
self.action_space = action_space
self.tech_indicator_list = tech_indicator_list
# action_space normalization and shape is self.stock_dim
self.action_space = spaces.Box(low = 0, high = 1,shape = (self.action_space,))
# Shape = (34, 30)
# covariance matrix + technical indicators
self.observation_space = spaces.Box(low=-np.inf, high=np.inf, shape = (self.state_space+len(self.tech_indicator_list),self.state_space))
# load data from a pandas dataframe
self.data = self.df.loc[self.day,:]
self.covs = self.data['cov_list'].values[0]
self.state = np.append(np.array(self.covs), [self.data[tech].values.tolist() for tech in self.tech_indicator_list ], axis=0)
self.terminal = False
self.turbulence_threshold = turbulence_threshold
# initialize state: initial portfolio return + individual stock return + individual weights
self.portfolio_value = self.initial_amount
# memorize portfolio value each step
self.asset_memory = [self.initial_amount]
# memorize portfolio return each step
self.portfolio_return_memory = [0]
self.actions_memory=[[1/self.stock_dim]*self.stock_dim]
self.date_memory=[self.data.date.unique()[0]]
def step(self, actions):
# print(self.day)
self.terminal = self.day >= len(self.df.index.unique())-1
# print(actions)
if self.terminal:
df = pd.DataFrame(self.portfolio_return_memory)
df.columns = ['daily_return']
plt.plot(df.daily_return.cumsum(),'r')
plt.savefig('results/cumulative_reward.png')
plt.close()
plt.plot(self.portfolio_return_memory,'r')
plt.savefig('results/rewards.png')
plt.close()
print("=================================")
print("begin_total_asset:{}".format(self.asset_memory[0]))
print("end_total_asset:{}".format(self.portfolio_value))
df_daily_return = pd.DataFrame(self.portfolio_return_memory)
df_daily_return.columns = ['daily_return']
if df_daily_return['daily_return'].std() !=0:
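# annualized Sharpe ratio: scale the mean/std of daily returns by sqrt(252 trading days)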
sharpe = (252**0.5)*df_daily_return['daily_return'].mean()/ \
df_daily_return['daily_return'].std()
print("Sharpe: ",sharpe)
print("=================================")
return self.state, self.reward, self.terminal,{}
else:
#print("Model actions: ",actions)
# actions are the portfolio weight
# normalize to sum of 1
#if (np.array(actions) - np.array(actions).min()).sum() != 0:
# norm_actions = (np.array(actions) - np.array(actions).min()) / (np.array(actions) - np.array(actions).min()).sum()
#else:
# norm_actions = actions
weights = self.softmax_normalization(actions)
#print("Normalized actions: ", weights)
self.actions_memory.append(weights)
last_day_memory = self.data
#load next state
self.day += 1
self.data = self.df.loc[self.day,:]
self.covs = self.data['cov_list'].values[0]
self.state = np.append(np.array(self.covs), [self.data[tech].values.tolist() for tech in self.tech_indicator_list ], axis=0)
#print(self.state)
# calculate portfolio return
# individual stocks' return * weight
portfolio_return = sum(((self.data.close.values / last_day_memory.close.values)-1)*weights)
# update portfolio value
new_portfolio_value = self.portfolio_value*(1+portfolio_return)
self.portfolio_value = new_portfolio_value
# save into memory
self.portfolio_return_memory.append(portfolio_return)
self.date_memory.append(self.data.date.unique()[0])
self.asset_memory.append(new_portfolio_value)
# the reward is the new portfolio value, i.e., the end portfolio value
self.reward = new_portfolio_value
#print("Step reward: ", self.reward)
#self.reward = self.reward*self.reward_scaling
return self.state, self.reward, self.terminal, {}
def reset(self):
self.asset_memory = [self.initial_amount]
self.day = 0
self.data = self.df.loc[self.day,:]
# load states
self.covs = self.data['cov_list'].values[0]
self.state = np.append(np.array(self.covs), [self.data[tech].values.tolist() for tech in self.tech_indicator_list ], axis=0)
self.portfolio_value = self.initial_amount
#self.cost = 0
#self.trades = 0
self.terminal = False
self.portfolio_return_memory = [0]
self.actions_memory=[[1/self.stock_dim]*self.stock_dim]
self.date_memory=[self.data.date.unique()[0]]
return self.state
def render(self, mode='human'):
return self.state
def softmax_normalization(self, actions):
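# map raw policy outputs to non-negative portfolio weights that sum to 1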
numerator = np.exp(actions)
denominator = np.sum(np.exp(actions))
softmax_output = numerator/denominator
return softmax_output
def save_asset_memory(self):
date_list = self.date_memory
portfolio_return = self.portfolio_return_memory
#print(len(date_list))
#print(len(asset_list))
df_account_value = pd.DataFrame({'date':date_list,'daily_return':portfolio_return})
return df_account_value
def save_action_memory(self):
# date and close price length must match actions length
date_list = self.date_memory
df_date = pd.DataFrame(date_list)
df_date.columns = ['date']
action_list = self.actions_memory
df_actions = pd.DataFrame(action_list)
df_actions.columns = self.data.tic.values
df_actions.index = df_date.date
#df_actions = pd.DataFrame({'date':date_list,'actions':action_list})
return df_actions
def _seed(self, seed=None):
self.np_random, seed = seeding.np_random(seed)
return [seed]
def get_sb_env(self):
e = DummyVecEnv([lambda: self])
obs = e.reset()
return e, obs
stock_dimension = len(train.tic.unique())
state_space = stock_dimension
print(f"Stock Dimension: {stock_dimension}, State Space: {state_space}")
env_kwargs = {
"hmax": 100,
"initial_amount": 1000000,
"transaction_cost_pct": 0.001,
"state_space": state_space,
"stock_dim": stock_dimension,
"tech_indicator_list": config.TECHNICAL_INDICATORS_LIST,
"action_space": stock_dimension,
"reward_scaling": 1e-4
}
e_train_gym = StockPortfolioEnv(df = train, **env_kwargs)
env_train, _ = e_train_gym.get_sb_env()
print(type(env_train))
###Output
<class 'stable_baselines3.common.vec_env.dummy_vec_env.DummyVecEnv'>
###Markdown
Part 6: Implement DRL Algorithms
* The implementation of the DRL algorithms is based on **OpenAI Baselines** and **Stable Baselines**. Stable Baselines is a fork of OpenAI Baselines with major structural refactoring and code cleanups.
* The FinRL library includes fine-tuned standard DRL algorithms such as DQN, DDPG, Multi-Agent DDPG, PPO, SAC, A2C, and TD3. We also allow users to design their own DRL algorithms by adapting these implementations.
###Code
# initialize
agent = DRLAgent(env = env_train)
###Output
_____no_output_____
###Markdown
Model 1: **A2C**
###Code
agent = DRLAgent(env = env_train)
A2C_PARAMS = {"n_steps": 5, "ent_coef": 0.005, "learning_rate": 0.0002}
model_a2c = agent.get_model(model_name="a2c",model_kwargs = A2C_PARAMS)
trained_a2c = agent.train_model(model=model_a2c,
tb_log_name='a2c',
total_timesteps=60000)
###Output
Logging to tensorboard_log/a2c/a2c_1
-------------------------------------
| time/ | |
| fps | 130 |
| iterations | 100 |
| time_elapsed | 3 |
| total_timesteps | 500 |
| train/ | |
| entropy_loss | -42.5 |
| explained_variance | -4.23e+15 |
| learning_rate | 0.0002 |
| n_updates | 99 |
| policy_loss | 1.8e+08 |
| std | 0.997 |
| value_loss | 2.48e+13 |
-------------------------------------
-------------------------------------
| time/ | |
| fps | 157 |
| iterations | 200 |
| time_elapsed | 6 |
| total_timesteps | 1000 |
| train/ | |
| entropy_loss | -42.5 |
| explained_variance | -7.89e+14 |
| learning_rate | 0.0002 |
| n_updates | 199 |
| policy_loss | 2.44e+08 |
| std | 0.997 |
| value_loss | 4.08e+13 |
-------------------------------------
-------------------------------------
| time/ | |
| fps | 167 |
| iterations | 300 |
| time_elapsed | 8 |
| total_timesteps | 1500 |
| train/ | |
| entropy_loss | -42.5 |
| explained_variance | -9.77e+25 |
| learning_rate | 0.0002 |
| n_updates | 299 |
| policy_loss | 4.02e+08 |
| std | 0.997 |
| value_loss | 9.82e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 179 |
| iterations | 400 |
| time_elapsed | 11 |
| total_timesteps | 2000 |
| train/ | |
| entropy_loss | -42.5 |
| explained_variance | -6.9e+16 |
| learning_rate | 0.0002 |
| n_updates | 399 |
| policy_loss | 4.57e+08 |
| std | 0.997 |
| value_loss | 1.39e+14 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 189 |
| iterations | 500 |
| time_elapsed | 13 |
| total_timesteps | 2500 |
| train/ | |
| entropy_loss | -42.5 |
| explained_variance | -4.81e+17 |
| learning_rate | 0.0002 |
| n_updates | 499 |
| policy_loss | 6.13e+08 |
| std | 0.996 |
| value_loss | 2.53e+14 |
-------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4550666.315740787
Sharpe: 1.0302838133559835
=================================
------------------------------------
| time/ | |
| fps | 192 |
| iterations | 600 |
| time_elapsed | 15 |
| total_timesteps | 3000 |
| train/ | |
| entropy_loss | -42.4 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 599 |
| policy_loss | 1.96e+08 |
| std | 0.996 |
| value_loss | 2.53e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 197 |
| iterations | 700 |
| time_elapsed | 17 |
| total_timesteps | 3500 |
| train/ | |
| entropy_loss | -42.4 |
| explained_variance | -2.18e+17 |
| learning_rate | 0.0002 |
| n_updates | 699 |
| policy_loss | 2.37e+08 |
| std | 0.996 |
| value_loss | 4.06e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 202 |
| iterations | 800 |
| time_elapsed | 19 |
| total_timesteps | 4000 |
| train/ | |
| entropy_loss | -42.4 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 799 |
| policy_loss | 3.7e+08 |
| std | 0.995 |
| value_loss | 1.01e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 206 |
| iterations | 900 |
| time_elapsed | 21 |
| total_timesteps | 4500 |
| train/ | |
| entropy_loss | -42.4 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 899 |
| policy_loss | 4.31e+08 |
| std | 0.995 |
| value_loss | 1.29e+14 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 208 |
| iterations | 1000 |
| time_elapsed | 23 |
| total_timesteps | 5000 |
| train/ | |
| entropy_loss | -42.4 |
| explained_variance | -1.18e+18 |
| learning_rate | 0.0002 |
| n_updates | 999 |
| policy_loss | 6.01e+08 |
| std | 0.995 |
| value_loss | 2.52e+14 |
-------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4538927.251756459
Sharpe: 1.0239597239761906
=================================
------------------------------------
| time/ | |
| fps | 209 |
| iterations | 1100 |
| time_elapsed | 26 |
| total_timesteps | 5500 |
| train/ | |
| entropy_loss | -42.4 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 1099 |
| policy_loss | 2.02e+08 |
| std | 0.995 |
| value_loss | 2.44e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 211 |
| iterations | 1200 |
| time_elapsed | 28 |
| total_timesteps | 6000 |
| train/ | |
| entropy_loss | -42.4 |
| explained_variance | -3.58e+18 |
| learning_rate | 0.0002 |
| n_updates | 1199 |
| policy_loss | 2.77e+08 |
| std | 0.995 |
| value_loss | 4.09e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 213 |
| iterations | 1300 |
| time_elapsed | 30 |
| total_timesteps | 6500 |
| train/ | |
| entropy_loss | -42.4 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 1299 |
| policy_loss | 3.35e+08 |
| std | 0.994 |
| value_loss | 8.06e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 215 |
| iterations | 1400 |
| time_elapsed | 32 |
| total_timesteps | 7000 |
| train/ | |
| entropy_loss | -42.4 |
| explained_variance | -1.69e+20 |
| learning_rate | 0.0002 |
| n_updates | 1399 |
| policy_loss | 4.1e+08 |
| std | 0.994 |
| value_loss | 1.2e+14 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 217 |
| iterations | 1500 |
| time_elapsed | 34 |
| total_timesteps | 7500 |
| train/ | |
| entropy_loss | -42.4 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 1499 |
| policy_loss | 5.74e+08 |
| std | 0.994 |
| value_loss | 2.47e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4569623.286530429
Sharpe: 1.0309827263626288
=================================
-------------------------------------
| time/ | |
| fps | 217 |
| iterations | 1600 |
| time_elapsed | 36 |
| total_timesteps | 8000 |
| train/ | |
| entropy_loss | -42.4 |
| explained_variance | -1.11e+24 |
| learning_rate | 0.0002 |
| n_updates | 1599 |
| policy_loss | 1.81e+08 |
| std | 0.994 |
| value_loss | 2.28e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 218 |
| iterations | 1700 |
| time_elapsed | 38 |
| total_timesteps | 8500 |
| train/ | |
| entropy_loss | -42.4 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 1699 |
| policy_loss | 2.6e+08 |
| std | 0.993 |
| value_loss | 4.54e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 216 |
| iterations | 1800 |
| time_elapsed | 41 |
| total_timesteps | 9000 |
| train/ | |
| entropy_loss | -42.3 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 1799 |
| policy_loss | 3.57e+08 |
| std | 0.993 |
| value_loss | 9.62e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 216 |
| iterations | 1900 |
| time_elapsed | 43 |
| total_timesteps | 9500 |
| train/ | |
| entropy_loss | -42.3 |
| explained_variance | -6.95e+20 |
| learning_rate | 0.0002 |
| n_updates | 1899 |
| policy_loss | 4.08e+08 |
| std | 0.992 |
| value_loss | 1.33e+14 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 216 |
| iterations | 2000 |
| time_elapsed | 46 |
| total_timesteps | 10000 |
| train/ | |
| entropy_loss | -42.3 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 1999 |
| policy_loss | 7.22e+08 |
| std | 0.991 |
| value_loss | 3.02e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4784563.101868668
Sharpe: 1.0546332869946304
=================================
------------------------------------
| time/ | |
| fps | 216 |
| iterations | 2100 |
| time_elapsed | 48 |
| total_timesteps | 10500 |
| train/ | |
| entropy_loss | -42.3 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 2099 |
| policy_loss | 1.64e+08 |
| std | 0.991 |
| value_loss | 2.02e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 217 |
| iterations | 2200 |
| time_elapsed | 50 |
| total_timesteps | 11000 |
| train/ | |
| entropy_loss | -42.3 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 2199 |
| policy_loss | 2.31e+08 |
| std | 0.99 |
| value_loss | 3.61e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 218 |
| iterations | 2300 |
| time_elapsed | 52 |
| total_timesteps | 11500 |
| train/ | |
| entropy_loss | -42.3 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 2299 |
| policy_loss | 3.07e+08 |
| std | 0.99 |
| value_loss | 7.81e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 219 |
| iterations | 2400 |
| time_elapsed | 54 |
| total_timesteps | 12000 |
| train/ | |
| entropy_loss | -42.3 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 2399 |
| policy_loss | 4.03e+08 |
| std | 0.99 |
| value_loss | 1.05e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 220 |
| iterations | 2500 |
| time_elapsed | 56 |
| total_timesteps | 12500 |
| train/ | |
| entropy_loss | -42.3 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 2499 |
| policy_loss | 5.57e+08 |
| std | 0.99 |
| value_loss | 2.27e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4265807.380536508
Sharpe: 0.9867782700137868
=================================
-------------------------------------
| time/ | |
| fps | 219 |
| iterations | 2600 |
| time_elapsed | 59 |
| total_timesteps | 13000 |
| train/ | |
| entropy_loss | -42.3 |
| explained_variance | -3.35e+20 |
| learning_rate | 0.0002 |
| n_updates | 2599 |
| policy_loss | 1.62e+08 |
| std | 0.989 |
| value_loss | 1.89e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 220 |
| iterations | 2700 |
| time_elapsed | 61 |
| total_timesteps | 13500 |
| train/ | |
| entropy_loss | -42.2 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 2699 |
| policy_loss | 2.56e+08 |
| std | 0.989 |
| value_loss | 4.37e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 221 |
| iterations | 2800 |
| time_elapsed | 63 |
| total_timesteps | 14000 |
| train/ | |
| entropy_loss | -42.2 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 2799 |
| policy_loss | 3.57e+08 |
| std | 0.989 |
| value_loss | 9.53e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 221 |
| iterations | 2900 |
| time_elapsed | 65 |
| total_timesteps | 14500 |
| train/ | |
| entropy_loss | -42.2 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 2899 |
| policy_loss | 4.31e+08 |
| std | 0.988 |
| value_loss | 1.42e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 222 |
| iterations | 3000 |
| time_elapsed | 67 |
| total_timesteps | 15000 |
| train/ | |
| entropy_loss | -42.2 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 2999 |
| policy_loss | 6.16e+08 |
| std | 0.988 |
| value_loss | 2.68e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4737187.266470802
Sharpe: 1.048554781654813
=================================
------------------------------------
| time/ | |
| fps | 222 |
| iterations | 3100 |
| time_elapsed | 69 |
| total_timesteps | 15500 |
| train/ | |
| entropy_loss | -42.2 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 3099 |
| policy_loss | 1.57e+08 |
| std | 0.988 |
| value_loss | 1.96e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 222 |
| iterations | 3200 |
| time_elapsed | 71 |
| total_timesteps | 16000 |
| train/ | |
| entropy_loss | -42.2 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 3199 |
| policy_loss | 2.45e+08 |
| std | 0.988 |
| value_loss | 3.58e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 223 |
| iterations | 3300 |
| time_elapsed | 73 |
| total_timesteps | 16500 |
| train/ | |
| entropy_loss | -42.2 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 3299 |
| policy_loss | 3.71e+08 |
| std | 0.987 |
| value_loss | 8.38e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 223 |
| iterations | 3400 |
| time_elapsed | 75 |
| total_timesteps | 17000 |
| train/ | |
| entropy_loss | -42.2 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 3399 |
| policy_loss | 3.89e+08 |
| std | 0.987 |
| value_loss | 1.19e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 223 |
| iterations | 3500 |
| time_elapsed | 78 |
| total_timesteps | 17500 |
| train/ | |
| entropy_loss | -42.2 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 3499 |
| policy_loss | 5.47e+08 |
| std | 0.987 |
| value_loss | 2.32e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4594345.465329124
Sharpe: 1.0338662249918555
=================================
-------------------------------------
| time/ | |
| fps | 223 |
| iterations | 3600 |
| time_elapsed | 80 |
| total_timesteps | 18000 |
| train/ | |
| entropy_loss | -42.2 |
| explained_variance | -2.39e+23 |
| learning_rate | 0.0002 |
| n_updates | 3599 |
| policy_loss | 1.56e+08 |
| std | 0.987 |
| value_loss | 1.98e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 224 |
| iterations | 3700 |
| time_elapsed | 82 |
| total_timesteps | 18500 |
| train/ | |
| entropy_loss | -42.1 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 3699 |
| policy_loss | 2.45e+08 |
| std | 0.986 |
| value_loss | 3.78e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 224 |
| iterations | 3800 |
| time_elapsed | 84 |
| total_timesteps | 19000 |
| train/ | |
| entropy_loss | -42.1 |
| explained_variance | -1.11e+24 |
| learning_rate | 0.0002 |
| n_updates | 3799 |
| policy_loss | 3.75e+08 |
| std | 0.986 |
| value_loss | 9.09e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 224 |
| iterations | 3900 |
| time_elapsed | 86 |
| total_timesteps | 19500 |
| train/ | |
| entropy_loss | -42.1 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 3899 |
| policy_loss | 4.23e+08 |
| std | 0.986 |
| value_loss | 1.09e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 225 |
| iterations | 4000 |
| time_elapsed | 88 |
| total_timesteps | 20000 |
| train/ | |
| entropy_loss | -42.1 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 3999 |
| policy_loss | 5.46e+08 |
| std | 0.985 |
| value_loss | 2.21e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4537629.671792137
Sharpe: 1.027306122996326
=================================
------------------------------------
| time/ | |
| fps | 224 |
| iterations | 4100 |
| time_elapsed | 91 |
| total_timesteps | 20500 |
| train/ | |
| entropy_loss | -42.1 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 4099 |
| policy_loss | 1.76e+08 |
| std | 0.985 |
| value_loss | 1.96e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 225 |
| iterations | 4200 |
| time_elapsed | 93 |
| total_timesteps | 21000 |
| train/ | |
| entropy_loss | -42 |
| explained_variance | -4.27e+23 |
| learning_rate | 0.0002 |
| n_updates | 4199 |
| policy_loss | 2.17e+08 |
| std | 0.983 |
| value_loss | 3.5e+13 |
-------------------------------------
-------------------------------------
| time/ | |
| fps | 225 |
| iterations | 4300 |
| time_elapsed | 95 |
| total_timesteps | 21500 |
| train/ | |
| entropy_loss | -42 |
| explained_variance | -9.61e+23 |
| learning_rate | 0.0002 |
| n_updates | 4299 |
| policy_loss | 3.36e+08 |
| std | 0.982 |
| value_loss | 7.88e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 225 |
| iterations | 4400 |
| time_elapsed | 97 |
| total_timesteps | 22000 |
| train/ | |
| entropy_loss | -42 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 4399 |
| policy_loss | 3.9e+08 |
| std | 0.982 |
| value_loss | 1.09e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 226 |
| iterations | 4500 |
| time_elapsed | 99 |
| total_timesteps | 22500 |
| train/ | |
| entropy_loss | -42 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 4499 |
| policy_loss | 5.96e+08 |
| std | 0.982 |
| value_loss | 2.24e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4641050.148925118
Sharpe: 1.035206741352005
=================================
------------------------------------
| time/ | |
| fps | 226 |
| iterations | 4600 |
| time_elapsed | 101 |
| total_timesteps | 23000 |
| train/ | |
| entropy_loss | -42 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 4599 |
| policy_loss | 1.86e+08 |
| std | 0.981 |
| value_loss | 2.04e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 226 |
| iterations | 4700 |
| time_elapsed | 103 |
| total_timesteps | 23500 |
| train/ | |
| entropy_loss | -42 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 4699 |
| policy_loss | 2.4e+08 |
| std | 0.981 |
| value_loss | 4.09e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 226 |
| iterations | 4800 |
| time_elapsed | 105 |
| total_timesteps | 24000 |
| train/ | |
| entropy_loss | -42 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 4799 |
| policy_loss | 3.69e+08 |
| std | 0.981 |
| value_loss | 9.69e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 226 |
| iterations | 4900 |
| time_elapsed | 108 |
| total_timesteps | 24500 |
| train/ | |
| entropy_loss | -42 |
| explained_variance | -5.9e+21 |
| learning_rate | 0.0002 |
| n_updates | 4899 |
| policy_loss | 4.46e+08 |
| std | 0.98 |
| value_loss | 1.36e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 226 |
| iterations | 5000 |
| time_elapsed | 110 |
| total_timesteps | 25000 |
| train/ | |
| entropy_loss | -42 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 4999 |
| policy_loss | 6.05e+08 |
| std | 0.98 |
| value_loss | 2.56e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:5080677.099515816
Sharpe: 1.0970818985375046
=================================
------------------------------------
| time/ | |
| fps | 225 |
| iterations | 5100 |
| time_elapsed | 113 |
| total_timesteps | 25500 |
| train/ | |
| entropy_loss | -42 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 5099 |
| policy_loss | 1.7e+08 |
| std | 0.98 |
| value_loss | 2.24e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 224 |
| iterations | 5200 |
| time_elapsed | 115 |
| total_timesteps | 26000 |
| train/ | |
| entropy_loss | -42 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 5199 |
| policy_loss | 2.39e+08 |
| std | 0.98 |
| value_loss | 3.92e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 224 |
| iterations | 5300 |
| time_elapsed | 117 |
| total_timesteps | 26500 |
| train/ | |
| entropy_loss | -42 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 5299 |
| policy_loss | 3.24e+08 |
| std | 0.98 |
| value_loss | 8.04e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 224 |
| iterations | 5400 |
| time_elapsed | 120 |
| total_timesteps | 27000 |
| train/ | |
| entropy_loss | -41.9 |
| explained_variance | -4.8e+21 |
| learning_rate | 0.0002 |
| n_updates | 5399 |
| policy_loss | 4.29e+08 |
| std | 0.979 |
| value_loss | 1.22e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 224 |
| iterations | 5500 |
| time_elapsed | 122 |
| total_timesteps | 27500 |
| train/ | |
| entropy_loss | -41.9 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 5499 |
| policy_loss | 5.4e+08 |
| std | 0.979 |
| value_loss | 2.31e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4811657.503165074
Sharpe: 1.0589276474603557
=================================
------------------------------------
| time/ | |
| fps | 224 |
| iterations | 5600 |
| time_elapsed | 124 |
| total_timesteps | 28000 |
| train/ | |
| entropy_loss | -41.9 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 5599 |
| policy_loss | 1.71e+08 |
| std | 0.978 |
| value_loss | 2.12e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 224 |
| iterations | 5700 |
| time_elapsed | 126 |
| total_timesteps | 28500 |
| train/ | |
| entropy_loss | -41.9 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 5699 |
| policy_loss | 2.15e+08 |
| std | 0.978 |
| value_loss | 3.76e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 224 |
| iterations | 5800 |
| time_elapsed | 129 |
| total_timesteps | 29000 |
| train/ | |
| entropy_loss | -41.9 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 5799 |
| policy_loss | 3.25e+08 |
| std | 0.978 |
| value_loss | 7.21e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 224 |
| iterations | 5900 |
| time_elapsed | 131 |
| total_timesteps | 29500 |
| train/ | |
| entropy_loss | -41.9 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 5899 |
| policy_loss | 3.48e+08 |
| std | 0.977 |
| value_loss | 9.82e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 225 |
| iterations | 6000 |
| time_elapsed | 133 |
| total_timesteps | 30000 |
| train/ | |
| entropy_loss | -41.8 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 5999 |
| policy_loss | 5.64e+08 |
| std | 0.976 |
| value_loss | 2.13e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4485060.270775738
Sharpe: 1.01141473877631
=================================
------------------------------------
| time/ | |
| fps | 224 |
| iterations | 6100 |
| time_elapsed | 135 |
| total_timesteps | 30500 |
| train/ | |
| entropy_loss | -41.8 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 6099 |
| policy_loss | 1.76e+08 |
| std | 0.976 |
| value_loss | 2.21e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 224 |
| iterations | 6200 |
| time_elapsed | 137 |
| total_timesteps | 31000 |
| train/ | |
| entropy_loss | -41.8 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 6199 |
| policy_loss | 2.37e+08 |
| std | 0.976 |
| value_loss | 3.86e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 224 |
| iterations | 6300 |
| time_elapsed | 140 |
| total_timesteps | 31500 |
| train/ | |
| entropy_loss | -41.8 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 6299 |
| policy_loss | 3.28e+08 |
| std | 0.975 |
| value_loss | 7.7e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 224 |
| iterations | 6400 |
| time_elapsed | 142 |
| total_timesteps | 32000 |
| train/ | |
| entropy_loss | -41.8 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 6399 |
| policy_loss | 4.03e+08 |
| std | 0.975 |
| value_loss | 1.03e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 224 |
| iterations | 6500 |
| time_elapsed | 144 |
| total_timesteps | 32500 |
| train/ | |
| entropy_loss | -41.8 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 6499 |
| policy_loss | 5.93e+08 |
| std | 0.975 |
| value_loss | 2.38e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4716704.9549536165
Sharpe: 1.0510500905659037
=================================
------------------------------------
| time/ | |
| fps | 224 |
| iterations | 6600 |
| time_elapsed | 147 |
| total_timesteps | 33000 |
| train/ | |
| entropy_loss | -41.8 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 6599 |
| policy_loss | 1.78e+08 |
| std | 0.975 |
| value_loss | 2.04e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 224 |
| iterations | 6700 |
| time_elapsed | 149 |
| total_timesteps | 33500 |
| train/ | |
| entropy_loss | -41.8 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 6699 |
| policy_loss | 2.4e+08 |
| std | 0.974 |
| value_loss | 3.85e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 224 |
| iterations | 6800 |
| time_elapsed | 151 |
| total_timesteps | 34000 |
| train/ | |
| entropy_loss | -41.8 |
| explained_variance | -1.16e+24 |
| learning_rate | 0.0002 |
| n_updates | 6799 |
| policy_loss | 3.2e+08 |
| std | 0.974 |
| value_loss | 7.66e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 224 |
| iterations | 6900 |
| time_elapsed | 153 |
| total_timesteps | 34500 |
| train/ | |
| entropy_loss | -41.8 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 6899 |
| policy_loss | 3.45e+08 |
| std | 0.973 |
| value_loss | 9.59e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 224 |
| iterations | 7000 |
| time_elapsed | 155 |
| total_timesteps | 35000 |
| train/ | |
| entropy_loss | -41.8 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 6999 |
| policy_loss | 6.22e+08 |
| std | 0.973 |
| value_loss | 2.58e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4722061.646242311
Sharpe: 1.0529486633467167
=================================
------------------------------------
| time/ | |
| fps | 224 |
| iterations | 7100 |
| time_elapsed | 158 |
| total_timesteps | 35500 |
| train/ | |
| entropy_loss | -41.7 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 7099 |
| policy_loss | 1.63e+08 |
| std | 0.973 |
| value_loss | 1.91e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 224 |
| iterations | 7200 |
| time_elapsed | 160 |
| total_timesteps | 36000 |
| train/ | |
| entropy_loss | -41.7 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 7199 |
| policy_loss | 2.26e+08 |
| std | 0.973 |
| value_loss | 3.43e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 224 |
| iterations | 7300 |
| time_elapsed | 162 |
| total_timesteps | 36500 |
| train/ | |
| entropy_loss | -41.7 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 7299 |
| policy_loss | 3.31e+08 |
| std | 0.972 |
| value_loss | 7.69e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 223 |
| iterations | 7400 |
| time_elapsed | 165 |
| total_timesteps | 37000 |
| train/ | |
| entropy_loss | -41.7 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 7399 |
| policy_loss | 3.65e+08 |
| std | 0.971 |
| value_loss | 9.37e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 222 |
| iterations | 7500 |
| time_elapsed | 168 |
| total_timesteps | 37500 |
| train/ | |
| entropy_loss | -41.7 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 7499 |
| policy_loss | 5.72e+08 |
| std | 0.971 |
| value_loss | 2.37e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4651172.332054012
Sharpe: 1.0366825368944979
=================================
------------------------------------
| time/ | |
| fps | 221 |
| iterations | 7600 |
| time_elapsed | 171 |
| total_timesteps | 38000 |
| train/ | |
| entropy_loss | -41.7 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 7599 |
| policy_loss | 1.71e+08 |
| std | 0.971 |
| value_loss | 2e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 220 |
| iterations | 7700 |
| time_elapsed | 174 |
| total_timesteps | 38500 |
| train/ | |
| entropy_loss | -41.6 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 7699 |
| policy_loss | 2e+08 |
| std | 0.97 |
| value_loss | 3.27e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 219 |
| iterations | 7800 |
| time_elapsed | 177 |
| total_timesteps | 39000 |
| train/ | |
| entropy_loss | -41.6 |
| explained_variance | -2.5e+23 |
| learning_rate | 0.0002 |
| n_updates | 7799 |
| policy_loss | 3.23e+08 |
| std | 0.969 |
| value_loss | 8.21e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 218 |
| iterations | 7900 |
| time_elapsed | 181 |
| total_timesteps | 39500 |
| train/ | |
| entropy_loss | -41.6 |
| explained_variance | -3.76e+23 |
| learning_rate | 0.0002 |
| n_updates | 7899 |
| policy_loss | 4.25e+08 |
| std | 0.969 |
| value_loss | 1.23e+14 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 216 |
| iterations | 8000 |
| time_elapsed | 184 |
| total_timesteps | 40000 |
| train/ | |
| entropy_loss | -41.6 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 7999 |
| policy_loss | 5.93e+08 |
| std | 0.969 |
| value_loss | 2.54e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:5004208.576042484
Sharpe: 1.0844189746438444
=================================
------------------------------------
| time/ | |
| fps | 215 |
| iterations | 8100 |
| time_elapsed | 187 |
| total_timesteps | 40500 |
| train/ | |
| entropy_loss | -41.6 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 8099 |
| policy_loss | 1.66e+08 |
| std | 0.969 |
| value_loss | 2e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 215 |
| iterations | 8200 |
| time_elapsed | 189 |
| total_timesteps | 41000 |
| train/ | |
| entropy_loss | -41.6 |
| explained_variance | -9.41e+22 |
| learning_rate | 0.0002 |
| n_updates | 8199 |
| policy_loss | 2.17e+08 |
| std | 0.969 |
| value_loss | 3.1e+13 |
-------------------------------------
-------------------------------------
| time/ | |
| fps | 215 |
| iterations | 8300 |
| time_elapsed | 192 |
| total_timesteps | 41500 |
| train/ | |
| entropy_loss | -41.6 |
| explained_variance | -2.31e+23 |
| learning_rate | 0.0002 |
| n_updates | 8299 |
| policy_loss | 3.37e+08 |
| std | 0.968 |
| value_loss | 7.5e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 215 |
| iterations | 8400 |
| time_elapsed | 194 |
| total_timesteps | 42000 |
| train/ | |
| entropy_loss | -41.6 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 8399 |
| policy_loss | 3.99e+08 |
| std | 0.967 |
| value_loss | 1.15e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 215 |
| iterations | 8500 |
| time_elapsed | 197 |
| total_timesteps | 42500 |
| train/ | |
| entropy_loss | -41.5 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 8499 |
| policy_loss | 5.83e+08 |
| std | 0.967 |
| value_loss | 2.03e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4690651.093610478
Sharpe: 1.0439707122222264
=================================
-------------------------------------
| time/ | |
| fps | 215 |
| iterations | 8600 |
| time_elapsed | 199 |
| total_timesteps | 43000 |
| train/ | |
| entropy_loss | -41.5 |
| explained_variance | -1.44e+21 |
| learning_rate | 0.0002 |
| n_updates | 8599 |
| policy_loss | 1.58e+08 |
| std | 0.967 |
| value_loss | 1.95e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 215 |
| iterations | 8700 |
| time_elapsed | 202 |
| total_timesteps | 43500 |
| train/ | |
| entropy_loss | -41.5 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 8699 |
| policy_loss | 2.11e+08 |
| std | 0.966 |
| value_loss | 3.08e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 215 |
| iterations | 8800 |
| time_elapsed | 204 |
| total_timesteps | 44000 |
| train/ | |
| entropy_loss | -41.5 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 8799 |
| policy_loss | 3.28e+08 |
| std | 0.965 |
| value_loss | 7.03e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 214 |
| iterations | 8900 |
| time_elapsed | 207 |
| total_timesteps | 44500 |
| train/ | |
| entropy_loss | -41.5 |
| explained_variance | -3.36e+23 |
| learning_rate | 0.0002 |
| n_updates | 8899 |
| policy_loss | 4.06e+08 |
| std | 0.965 |
| value_loss | 1.1e+14 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 213 |
| iterations | 9000 |
| time_elapsed | 210 |
| total_timesteps | 45000 |
| train/ | |
| entropy_loss | -41.5 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 8999 |
| policy_loss | 5.2e+08 |
| std | 0.964 |
| value_loss | 1.98e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4660061.433540329
Sharpe: 1.04048695684595
=================================
-------------------------------------
| time/ | |
| fps | 213 |
| iterations | 9100 |
| time_elapsed | 213 |
| total_timesteps | 45500 |
| train/ | |
| entropy_loss | -41.4 |
| explained_variance | -1.77e+21 |
| learning_rate | 0.0002 |
| n_updates | 9099 |
| policy_loss | 1.62e+08 |
| std | 0.964 |
| value_loss | 1.83e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 213 |
| iterations | 9200 |
| time_elapsed | 215 |
| total_timesteps | 46000 |
| train/ | |
| entropy_loss | -41.4 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 9199 |
| policy_loss | 2.01e+08 |
| std | 0.964 |
| value_loss | 2.87e+13 |
------------------------------------
-------------------------------------
| time/ | |
| fps | 213 |
| iterations | 9300 |
| time_elapsed | 217 |
| total_timesteps | 46500 |
| train/ | |
| entropy_loss | -41.4 |
| explained_variance | -2.13e+23 |
| learning_rate | 0.0002 |
| n_updates | 9299 |
| policy_loss | 3.31e+08 |
| std | 0.963 |
| value_loss | 7e+13 |
-------------------------------------
------------------------------------
| time/ | |
| fps | 213 |
| iterations | 9400 |
| time_elapsed | 220 |
| total_timesteps | 47000 |
| train/ | |
| entropy_loss | -41.4 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 9399 |
| policy_loss | 4.06e+08 |
| std | 0.963 |
| value_loss | 1.1e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 213 |
| iterations | 9500 |
| time_elapsed | 222 |
| total_timesteps | 47500 |
| train/ | |
| entropy_loss | -41.4 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 9499 |
| policy_loss | 5.33e+08 |
| std | 0.962 |
| value_loss | 2.11e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4841177.689704771
Sharpe: 1.0662304642107994
=================================
------------------------------------
| time/ | |
| fps | 213 |
| iterations | 9600 |
| time_elapsed | 224 |
| total_timesteps | 48000 |
| train/ | |
| entropy_loss | -41.4 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 9599 |
| policy_loss | 1.42e+08 |
| std | 0.962 |
| value_loss | 1.54e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 213 |
| iterations | 9700 |
| time_elapsed | 226 |
| total_timesteps | 48500 |
| train/ | |
| entropy_loss | -41.4 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 9699 |
| policy_loss | 1.72e+08 |
| std | 0.961 |
| value_loss | 2.54e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 213 |
| iterations | 9800 |
| time_elapsed | 229 |
| total_timesteps | 49000 |
| train/ | |
| entropy_loss | -41.4 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 9799 |
| policy_loss | 3.05e+08 |
| std | 0.961 |
| value_loss | 6.27e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 213 |
| iterations | 9900 |
| time_elapsed | 232 |
| total_timesteps | 49500 |
| train/ | |
| entropy_loss | -41.4 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 9899 |
| policy_loss | 3.52e+08 |
| std | 0.962 |
| value_loss | 9.87e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 212 |
| iterations | 10000 |
| time_elapsed | 234 |
| total_timesteps | 50000 |
| train/ | |
| entropy_loss | -41.4 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 9999 |
| policy_loss | 4.99e+08 |
| std | 0.962 |
| value_loss | 1.98e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4829593.807900699
Sharpe: 1.0662441117803074
=================================
------------------------------------
| time/ | |
| fps | 212 |
| iterations | 10100 |
| time_elapsed | 237 |
| total_timesteps | 50500 |
| train/ | |
| entropy_loss | -41.4 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 10099 |
| policy_loss | 1.41e+08 |
| std | 0.962 |
| value_loss | 1.59e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 212 |
| iterations | 10200 |
| time_elapsed | 239 |
| total_timesteps | 51000 |
| train/ | |
| entropy_loss | -41.4 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 10199 |
| policy_loss | 1.88e+08 |
| std | 0.961 |
| value_loss | 2.59e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 212 |
| iterations | 10300 |
| time_elapsed | 242 |
| total_timesteps | 51500 |
| train/ | |
| entropy_loss | -41.4 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 10299 |
| policy_loss | 3.11e+08 |
| std | 0.961 |
| value_loss | 5.9e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 212 |
| iterations | 10400 |
| time_elapsed | 244 |
| total_timesteps | 52000 |
| train/ | |
| entropy_loss | -41.4 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 10399 |
| policy_loss | 3.57e+08 |
| std | 0.961 |
| value_loss | 9.64e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 212 |
| iterations | 10500 |
| time_elapsed | 246 |
| total_timesteps | 52500 |
| train/ | |
| entropy_loss | -41.4 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 10499 |
| policy_loss | 4.69e+08 |
| std | 0.961 |
| value_loss | 1.89e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4867642.492651795
Sharpe: 1.0695800575241914
=================================
------------------------------------
| time/ | |
| fps | 212 |
| iterations | 10600 |
| time_elapsed | 249 |
| total_timesteps | 53000 |
| train/ | |
| entropy_loss | -41.3 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 10599 |
| policy_loss | 1.44e+08 |
| std | 0.96 |
| value_loss | 1.48e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 212 |
| iterations | 10700 |
| time_elapsed | 251 |
| total_timesteps | 53500 |
| train/ | |
| entropy_loss | -41.3 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 10699 |
| policy_loss | 1.9e+08 |
| std | 0.96 |
| value_loss | 2.62e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 212 |
| iterations | 10800 |
| time_elapsed | 253 |
| total_timesteps | 54000 |
| train/ | |
| entropy_loss | -41.3 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 10799 |
| policy_loss | 3.1e+08 |
| std | 0.959 |
| value_loss | 6.5e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 212 |
| iterations | 10900 |
| time_elapsed | 256 |
| total_timesteps | 54500 |
| train/ | |
| entropy_loss | -41.3 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 10899 |
| policy_loss | 3.56e+08 |
| std | 0.959 |
| value_loss | 1.09e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 212 |
| iterations | 11000 |
| time_elapsed | 258 |
| total_timesteps | 55000 |
| train/ | |
| entropy_loss | -41.3 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 10999 |
| policy_loss | 4.86e+08 |
| std | 0.958 |
| value_loss | 1.8e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4722117.849533835
Sharpe: 1.0511916286251552
=================================
------------------------------------
| time/ | |
| fps | 212 |
| iterations | 11100 |
| time_elapsed | 261 |
| total_timesteps | 55500 |
| train/ | |
| entropy_loss | -41.2 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 11099 |
| policy_loss | 1.37e+08 |
| std | 0.957 |
| value_loss | 1.42e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 212 |
| iterations | 11200 |
| time_elapsed | 263 |
| total_timesteps | 56000 |
| train/ | |
| entropy_loss | -41.2 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 11199 |
| policy_loss | 2.17e+08 |
| std | 0.956 |
| value_loss | 3.5e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 212 |
| iterations | 11300 |
| time_elapsed | 265 |
| total_timesteps | 56500 |
| train/ | |
| entropy_loss | -41.2 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 11299 |
| policy_loss | 3.17e+08 |
| std | 0.957 |
| value_loss | 7.01e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 212 |
| iterations | 11400 |
| time_elapsed | 268 |
| total_timesteps | 57000 |
| train/ | |
| entropy_loss | -41.2 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 11399 |
| policy_loss | 3.67e+08 |
| std | 0.956 |
| value_loss | 1.15e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 211 |
| iterations | 11500 |
| time_elapsed | 271 |
| total_timesteps | 57500 |
| train/ | |
| entropy_loss | -41.2 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 11499 |
| policy_loss | 5.1e+08 |
| std | 0.956 |
| value_loss | 1.78e+14 |
------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4803878.457147342
Sharpe: 1.0585455233591723
=================================
------------------------------------
| time/ | |
| fps | 211 |
| iterations | 11600 |
| time_elapsed | 274 |
| total_timesteps | 58000 |
| train/ | |
| entropy_loss | -41.2 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 11599 |
| policy_loss | 1.22e+08 |
| std | 0.956 |
| value_loss | 1.16e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 211 |
| iterations | 11700 |
| time_elapsed | 276 |
| total_timesteps | 58500 |
| train/ | |
| entropy_loss | -41.2 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 11699 |
| policy_loss | 2.17e+08 |
| std | 0.956 |
| value_loss | 3.15e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 211 |
| iterations | 11800 |
| time_elapsed | 279 |
| total_timesteps | 59000 |
| train/ | |
| entropy_loss | -41.2 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 11799 |
| policy_loss | 3.13e+08 |
| std | 0.956 |
| value_loss | 6.62e+13 |
------------------------------------
------------------------------------
| time/ | |
| fps | 211 |
| iterations | 11900 |
| time_elapsed | 281 |
| total_timesteps | 59500 |
| train/ | |
| entropy_loss | -41.2 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 11899 |
| policy_loss | 4.11e+08 |
| std | 0.956 |
| value_loss | 1.2e+14 |
------------------------------------
------------------------------------
| time/ | |
| fps | 211 |
| iterations | 12000 |
| time_elapsed | 283 |
| total_timesteps | 60000 |
| train/ | |
| entropy_loss | -41.2 |
| explained_variance | nan |
| learning_rate | 0.0002 |
| n_updates | 11999 |
| policy_loss | 5.16e+08 |
| std | 0.956 |
| value_loss | 1.93e+14 |
------------------------------------
###Markdown
Model 2: **PPO**
###Code
agent = DRLAgent(env=env_train)
PPO_PARAMS = {
    "n_steps": 2048,          # rollout length per policy update
    "ent_coef": 0.005,        # entropy bonus weight
    "learning_rate": 0.0001,  # optimizer step size
    "batch_size": 128,        # minibatch size for each gradient step
}
model_ppo = agent.get_model("ppo", model_kwargs=PPO_PARAMS)
trained_ppo = agent.train_model(model=model_ppo,
                                tb_log_name='ppo',
                                total_timesteps=80000)
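
# note (a sanity check added here, not from the original notebook): in
# stable-baselines3 PPO, batch_size is typically chosen as a factor of
# n_steps * n_envs; here 2048 * 1 / 128 = 16 minibatches per epoch.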
###Output
Logging to tensorboard_log/ppo/ppo_3
-----------------------------
| time/ | |
| fps | 458 |
| iterations | 1 |
| time_elapsed | 4 |
| total_timesteps | 2048 |
-----------------------------
=================================
begin_total_asset:1000000
end_total_asset:4917364.6278486075
Sharpe: 1.074414829116363
=================================
--------------------------------------------
| time/ | |
| fps | 391 |
| iterations | 2 |
| time_elapsed | 10 |
| total_timesteps | 4096 |
| train/ | |
| approx_kl | -7.8231096e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -3.71e+14 |
| learning_rate | 0.0001 |
| loss | 7.78e+14 |
| n_updates | 10 |
| policy_gradient_loss | -6.16e-07 |
| std | 1 |
| value_loss | 1.57e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4996331.100586685
Sharpe: 1.0890927964884638
=================================
--------------------------------------------
| time/ | |
| fps | 373 |
| iterations | 3 |
| time_elapsed | 16 |
| total_timesteps | 6144 |
| train/ | |
| approx_kl | -3.5390258e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -8.76e+14 |
| learning_rate | 0.0001 |
| loss | 1.1e+15 |
| n_updates | 20 |
| policy_gradient_loss | -4.29e-07 |
| std | 1 |
| value_loss | 2.33e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4751039.2878817525
Sharpe: 1.0560179406423764
=================================
--------------------------------------------
| time/ | |
| fps | 365 |
| iterations | 4 |
| time_elapsed | 22 |
| total_timesteps | 8192 |
| train/ | |
| approx_kl | -1.6763806e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -8.01e+15 |
| learning_rate | 0.0001 |
| loss | 1.25e+15 |
| n_updates | 30 |
| policy_gradient_loss | -5.58e-07 |
| std | 1 |
| value_loss | 2.59e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4769059.347696523
Sharpe: 1.056814654380227
=================================
--------------------------------------------
| time/ | |
| fps | 360 |
| iterations | 5 |
| time_elapsed | 28 |
| total_timesteps | 10240 |
| train/ | |
| approx_kl | -5.5879354e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -2.55e+16 |
| learning_rate | 0.0001 |
| loss | 1.24e+15 |
| n_updates | 40 |
| policy_gradient_loss | -4.9e-07 |
| std | 1 |
| value_loss | 2.7e+15 |
--------------------------------------------
--------------------------------------------
| time/ | |
| fps | 358 |
| iterations | 6 |
| time_elapsed | 34 |
| total_timesteps | 12288 |
| train/ | |
| approx_kl | 1.13621354e-07 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -9.17e+16 |
| learning_rate | 0.0001 |
| loss | 1.35e+15 |
| n_updates | 50 |
| policy_gradient_loss | -4.28e-07 |
| std | 1 |
| value_loss | 2.77e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4816491.86007194
Sharpe: 1.0636199939613733
=================================
-------------------------------------------
| time/ | |
| fps | 356 |
| iterations | 7 |
| time_elapsed | 40 |
| total_timesteps | 14336 |
| train/ | |
| approx_kl | 3.5390258e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.42e+17 |
| learning_rate | 0.0001 |
| loss | 1.03e+15 |
| n_updates | 60 |
| policy_gradient_loss | -6.52e-07 |
| std | 1 |
| value_loss | 1.94e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4631919.83090099
Sharpe: 1.0396504731290799
=================================
-------------------------------------------
| time/ | |
| fps | 354 |
| iterations | 8 |
| time_elapsed | 46 |
| total_timesteps | 16384 |
| train/ | |
| approx_kl | 1.7508864e-07 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -6.93e+17 |
| learning_rate | 0.0001 |
| loss | 9.83e+14 |
| n_updates | 70 |
| policy_gradient_loss | -5.78e-07 |
| std | 1 |
| value_loss | 2.06e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4728763.286321457
Sharpe: 1.052390302374202
=================================
-------------------------------------------
| time/ | |
| fps | 353 |
| iterations | 9 |
| time_elapsed | 52 |
| total_timesteps | 18432 |
| train/ | |
| approx_kl | 4.4703484e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.72e+18 |
| learning_rate | 0.0001 |
| loss | 1.25e+15 |
| n_updates | 80 |
| policy_gradient_loss | -4.84e-07 |
| std | 1 |
| value_loss | 2.33e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4439983.024798136
Sharpe: 1.013829383303325
=================================
--------------------------------------------
| time/ | |
| fps | 352 |
| iterations | 10 |
| time_elapsed | 58 |
| total_timesteps | 20480 |
| train/ | |
| approx_kl | -1.3038516e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.7e+18 |
| learning_rate | 0.0001 |
| loss | 1.17e+15 |
| n_updates | 90 |
| policy_gradient_loss | -4.82e-07 |
| std | 1 |
| value_loss | 2.58e+15 |
--------------------------------------------
-------------------------------------------
| time/ | |
| fps | 352 |
| iterations | 11 |
| time_elapsed | 63 |
| total_timesteps | 22528 |
| train/ | |
| approx_kl | -9.313226e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.85e+18 |
| learning_rate | 0.0001 |
| loss | 1.2e+15 |
| n_updates | 100 |
| policy_gradient_loss | -5.2e-07 |
| std | 1 |
| value_loss | 2.51e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:5048884.524536961
Sharpe: 1.0963911876706685
=================================
-------------------------------------------
| time/ | |
| fps | 351 |
| iterations | 12 |
| time_elapsed | 69 |
| total_timesteps | 24576 |
| train/ | |
| approx_kl | 3.7252903e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -2.67e+18 |
| learning_rate | 0.0001 |
| loss | 1.44e+15 |
| n_updates | 110 |
| policy_gradient_loss | -4.53e-07 |
| std | 1 |
| value_loss | 2.8e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4824229.456193555
Sharpe: 1.0648549464252506
=================================
-------------------------------------------
| time/ | |
| fps | 351 |
| iterations | 13 |
| time_elapsed | 75 |
| total_timesteps | 26624 |
| train/ | |
| approx_kl | 3.3527613e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -3.38e+18 |
| learning_rate | 0.0001 |
| loss | 7.89e+14 |
| n_updates | 120 |
| policy_gradient_loss | -6.06e-07 |
| std | 1 |
| value_loss | 1.76e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4602974.615591427
Sharpe: 1.034753433280377
=================================
-------------------------------------------
| time/ | |
| fps | 350 |
| iterations | 14 |
| time_elapsed | 81 |
| total_timesteps | 28672 |
| train/ | |
| approx_kl | 8.8475645e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.75e+19 |
| learning_rate | 0.0001 |
| loss | 1.23e+15 |
| n_updates | 130 |
| policy_gradient_loss | -5.8e-07 |
| std | 1 |
| value_loss | 2.27e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4608422.583401322
Sharpe: 1.035300880612428
=================================
-------------------------------------------
| time/ | |
| fps | 349 |
| iterations | 15 |
| time_elapsed | 87 |
| total_timesteps | 30720 |
| train/ | |
| approx_kl | 1.3038516e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -7.71e+18 |
| learning_rate | 0.0001 |
| loss | 1.22e+15 |
| n_updates | 140 |
| policy_gradient_loss | -5.63e-07 |
| std | 1 |
| value_loss | 2.39e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4826869.636472441
Sharpe: 1.0676330284861433
=================================
--------------------------------------------
| time/ | |
| fps | 348 |
| iterations | 16 |
| time_elapsed | 94 |
| total_timesteps | 32768 |
| train/ | |
| approx_kl | -1.4901161e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.51e+19 |
| learning_rate | 0.0001 |
| loss | 1.22e+15 |
| n_updates | 150 |
| policy_gradient_loss | -5.78e-07 |
| std | 1 |
| value_loss | 2.7e+15 |
--------------------------------------------
-------------------------------------------
| time/ | |
| fps | 346 |
| iterations | 17 |
| time_elapsed | 100 |
| total_timesteps | 34816 |
| train/ | |
| approx_kl | -5.401671e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.48e+19 |
| learning_rate | 0.0001 |
| loss | 1.48e+15 |
| n_updates | 160 |
| policy_gradient_loss | -3.96e-07 |
| std | 1 |
| value_loss | 2.81e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4364006.929301854
Sharpe: 1.002176631256902
=================================
--------------------------------------------
| time/ | |
| fps | 345 |
| iterations | 18 |
| time_elapsed | 106 |
| total_timesteps | 36864 |
| train/ | |
| approx_kl | -1.0803342e-07 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.15e+19 |
| learning_rate | 0.0001 |
| loss | 8.41e+14 |
| n_updates | 170 |
| policy_gradient_loss | -4.91e-07 |
| std | 1 |
| value_loss | 1.58e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4796634.5596691
Sharpe: 1.0678319491053092
=================================
--------------------------------------------
| time/ | |
| fps | 344 |
| iterations | 19 |
| time_elapsed | 112 |
| total_timesteps | 38912 |
| train/ | |
| approx_kl | -1.3038516e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -4.21e+19 |
| learning_rate | 0.0001 |
| loss | 1.03e+15 |
| n_updates | 180 |
| policy_gradient_loss | -5.6e-07 |
| std | 1 |
| value_loss | 2.02e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4969786.413399254
Sharpe: 1.0823021486710163
=================================
--------------------------------------------
| time/ | |
| fps | 344 |
| iterations | 20 |
| time_elapsed | 118 |
| total_timesteps | 40960 |
| train/ | |
| approx_kl | -6.7055225e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -6.41e+19 |
| learning_rate | 0.0001 |
| loss | 1.22e+15 |
| n_updates | 190 |
| policy_gradient_loss | -2.87e-07 |
| std | 1 |
| value_loss | 2.4e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4885480.801922398
Sharpe: 1.0729451877791811
=================================
--------------------------------------------
| time/ | |
| fps | 343 |
| iterations | 21 |
| time_elapsed | 125 |
| total_timesteps | 43008 |
| train/ | |
| approx_kl | -5.5879354e-09 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -6.85e+19 |
| learning_rate | 0.0001 |
| loss | 1.62e+15 |
| n_updates | 200 |
| policy_gradient_loss | -5.24e-07 |
| std | 1 |
| value_loss | 2.95e+15 |
--------------------------------------------
-------------------------------------------
| time/ | |
| fps | 343 |
| iterations | 22 |
| time_elapsed | 131 |
| total_timesteps | 45056 |
| train/ | |
| approx_kl | 1.8067658e-07 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -7.01e+19 |
| learning_rate | 0.0001 |
| loss | 1.34e+15 |
| n_updates | 210 |
| policy_gradient_loss | -4.62e-07 |
| std | 1 |
| value_loss | 2.93e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:5613709.009268909
Sharpe: 1.1673870008513114
=================================
--------------------------------------------
| time/ | |
| fps | 342 |
| iterations | 23 |
| time_elapsed | 137 |
| total_timesteps | 47104 |
| train/ | |
| approx_kl | -2.0489097e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -6.72e+19 |
| learning_rate | 0.0001 |
| loss | 1.41e+15 |
| n_updates | 220 |
| policy_gradient_loss | -4.78e-07 |
| std | 1 |
| value_loss | 2.71e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:5043800.590470289
Sharpe: 1.0953673306850924
=================================
-------------------------------------------
| time/ | |
| fps | 342 |
| iterations | 24 |
| time_elapsed | 143 |
| total_timesteps | 49152 |
| train/ | |
| approx_kl | 2.4214387e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.37e+20 |
| learning_rate | 0.0001 |
| loss | 1.01e+15 |
| n_updates | 230 |
| policy_gradient_loss | -5.28e-07 |
| std | 1 |
| value_loss | 2.26e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4776576.852863929
Sharpe: 1.0593811754233755
=================================
-------------------------------------------
| time/ | |
| fps | 342 |
| iterations | 25 |
| time_elapsed | 149 |
| total_timesteps | 51200 |
| train/ | |
| approx_kl | 4.4703484e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -3.27e+20 |
| learning_rate | 0.0001 |
| loss | 1.21e+15 |
| n_updates | 240 |
| policy_gradient_loss | -4.82e-07 |
| std | 1 |
| value_loss | 2.46e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4468393.200157898
Sharpe: 1.0192746589767419
=================================
-------------------------------------------
| time/ | |
| fps | 341 |
| iterations | 26 |
| time_elapsed | 156 |
| total_timesteps | 53248 |
| train/ | |
| approx_kl | 2.6077032e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.96e+20 |
| learning_rate | 0.0001 |
| loss | 1.31e+15 |
| n_updates | 250 |
| policy_gradient_loss | -5.36e-07 |
| std | 1 |
| value_loss | 2.59e+15 |
-------------------------------------------
--------------------------------------------
| time/ | |
| fps | 341 |
| iterations | 27 |
| time_elapsed | 162 |
| total_timesteps | 55296 |
| train/ | |
| approx_kl | -1.3038516e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.68e+20 |
| learning_rate | 0.0001 |
| loss | 1.33e+15 |
| n_updates | 260 |
| policy_gradient_loss | -3.77e-07 |
| std | 1 |
| value_loss | 2.51e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4875234.39450474
Sharpe: 1.0721137742534572
=================================
--------------------------------------------
| time/ | |
| fps | 340 |
| iterations | 28 |
| time_elapsed | 168 |
| total_timesteps | 57344 |
| train/ | |
| approx_kl | -1.2479722e-07 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.66e+20 |
| learning_rate | 0.0001 |
| loss | 1.59e+15 |
| n_updates | 270 |
| policy_gradient_loss | -4.61e-07 |
| std | 1 |
| value_loss | 2.8e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4600459.210918712
Sharpe: 1.034756153745345
=================================
-------------------------------------------
| time/ | |
| fps | 340 |
| iterations | 29 |
| time_elapsed | 174 |
| total_timesteps | 59392 |
| train/ | |
| approx_kl | -4.284084e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.26e+20 |
| learning_rate | 0.0001 |
| loss | 8.07e+14 |
| n_updates | 280 |
| policy_gradient_loss | -5.44e-07 |
| std | 1 |
| value_loss | 1.62e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4526188.381438201
Sharpe: 1.0293846869900876
=================================
--------------------------------------------
| time/ | |
| fps | 339 |
| iterations | 30 |
| time_elapsed | 180 |
| total_timesteps | 61440 |
| train/ | |
| approx_kl | -2.4214387e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -6.44e+20 |
| learning_rate | 0.0001 |
| loss | 1.12e+15 |
| n_updates | 290 |
| policy_gradient_loss | -5.65e-07 |
| std | 1 |
| value_loss | 2.1e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4487836.803716703
Sharpe: 1.010974660894394
=================================
--------------------------------------------
| time/ | |
| fps | 339 |
| iterations | 31 |
| time_elapsed | 187 |
| total_timesteps | 63488 |
| train/ | |
| approx_kl | -2.6077032e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -4.47e+20 |
| learning_rate | 0.0001 |
| loss | 1.14e+15 |
| n_updates | 300 |
| policy_gradient_loss | -4.8e-07 |
| std | 1 |
| value_loss | 2.25e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4480729.650671386
Sharpe: 1.0219085518652522
=================================
--------------------------------------------
| time/ | |
| fps | 339 |
| iterations | 32 |
| time_elapsed | 193 |
| total_timesteps | 65536 |
| train/ | |
| approx_kl | -2.0302832e-07 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -3.87e+20 |
| learning_rate | 0.0001 |
| loss | 1.28e+15 |
| n_updates | 310 |
| policy_gradient_loss | -4.4e-07 |
| std | 1 |
| value_loss | 2.51e+15 |
--------------------------------------------
------------------------------------------
| time/ | |
| fps | 339 |
| iterations | 33 |
| time_elapsed | 199 |
| total_timesteps | 67584 |
| train/ | |
| approx_kl | 1.359731e-07 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -3.68e+20 |
| learning_rate | 0.0001 |
| loss | 1.24e+15 |
| n_updates | 320 |
| policy_gradient_loss | -4.51e-07 |
| std | 1 |
| value_loss | 2.66e+15 |
------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4399373.734699048
Sharpe: 1.005407087483561
=================================
-------------------------------------------
| time/ | |
| fps | 338 |
| iterations | 34 |
| time_elapsed | 205 |
| total_timesteps | 69632 |
| train/ | |
| approx_kl | 2.2351742e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -2.29e+20 |
| learning_rate | 0.0001 |
| loss | 8.5e+14 |
| n_updates | 330 |
| policy_gradient_loss | -5.56e-07 |
| std | 1 |
| value_loss | 1.64e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4305742.921261859
Sharpe: 0.9945061913961891
=================================
-------------------------------------------
| time/ | |
| fps | 338 |
| iterations | 35 |
| time_elapsed | 211 |
| total_timesteps | 71680 |
| train/ | |
| approx_kl | 1.3411045e-07 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -7.11e+20 |
| learning_rate | 0.0001 |
| loss | 7.97e+14 |
| n_updates | 340 |
| policy_gradient_loss | -6.48e-07 |
| std | 1 |
| value_loss | 1.8e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4794175.629957249
Sharpe: 1.0611635246548963
=================================
--------------------------------------------
| time/ | |
| fps | 338 |
| iterations | 36 |
| time_elapsed | 217 |
| total_timesteps | 73728 |
| train/ | |
| approx_kl | -3.3527613e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.16e+21 |
| learning_rate | 0.0001 |
| loss | 1.07e+15 |
| n_updates | 350 |
| policy_gradient_loss | -4.82e-07 |
| std | 1 |
| value_loss | 2.06e+15 |
--------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4467487.416264421
Sharpe: 1.021012208464475
=================================
------------------------------------------
| time/ | |
| fps | 338 |
| iterations | 37 |
| time_elapsed | 224 |
| total_timesteps | 75776 |
| train/ | |
| approx_kl | 5.401671e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -9.89e+20 |
| learning_rate | 0.0001 |
| loss | 1.46e+15 |
| n_updates | 360 |
| policy_gradient_loss | -4.78e-07 |
| std | 1 |
| value_loss | 2.75e+15 |
------------------------------------------
-------------------------------------------
| time/ | |
| fps | 338 |
| iterations | 38 |
| time_elapsed | 229 |
| total_timesteps | 77824 |
| train/ | |
| approx_kl | 1.6763806e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -7.64e+20 |
| learning_rate | 0.0001 |
| loss | 1.25e+15 |
| n_updates | 370 |
| policy_gradient_loss | -4.54e-07 |
| std | 1 |
| value_loss | 2.57e+15 |
-------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4806649.219027834
Sharpe: 1.0604486398186765
=================================
------------------------------------------
| time/ | |
| fps | 338 |
| iterations | 39 |
| time_elapsed | 236 |
| total_timesteps | 79872 |
| train/ | |
| approx_kl | 4.284084e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -6.96e+20 |
| learning_rate | 0.0001 |
| loss | 1.28e+15 |
| n_updates | 380 |
| policy_gradient_loss | -5.9e-07 |
| std | 1 |
| value_loss | 2.44e+15 |
------------------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4653147.508966551
Sharpe: 1.043189911078732
=================================
-------------------------------------------
| time/ | |
| fps | 338 |
| iterations | 40 |
| time_elapsed | 242 |
| total_timesteps | 81920 |
| train/ | |
| approx_kl | 6.3329935e-08 |
| clip_fraction | 0 |
| clip_range | 0.2 |
| entropy_loss | -42.6 |
| explained_variance | -1.04e+21 |
| learning_rate | 0.0001 |
| loss | 1.01e+15 |
| n_updates | 390 |
| policy_gradient_loss | -5.33e-07 |
| std | 1 |
| value_loss | 1.82e+15 |
-------------------------------------------
###Markdown
Model 3: **DDPG**
###Code
agent = DRLAgent(env = env_train)
DDPG_PARAMS = {"batch_size": 128, "buffer_size": 50000, "learning_rate": 0.001}
model_ddpg = agent.get_model("ddpg",model_kwargs = DDPG_PARAMS)
trained_ddpg = agent.train_model(model=model_ddpg,
tb_log_name='ddpg',
total_timesteps=50000)
###Output
Logging to tensorboard_log/ddpg/ddpg_2
=================================
begin_total_asset:1000000
end_total_asset:4625995.900359718
Sharpe: 1.040202670783119
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
----------------------------------
| time/ | |
| episodes | 4 |
| fps | 22 |
| time_elapsed | 439 |
| total timesteps | 10064 |
| train/ | |
| actor_loss | -6.99e+07 |
| critic_loss | 7.27e+12 |
| learning_rate | 0.001 |
| n_updates | 7548 |
----------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
----------------------------------
| time/ | |
| episodes | 8 |
| fps | 20 |
| time_elapsed | 980 |
| total timesteps | 20128 |
| train/ | |
| actor_loss | -1.44e+08 |
| critic_loss | 1.81e+13 |
| learning_rate | 0.001 |
| n_updates | 17612 |
----------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
----------------------------------
| time/ | |
| episodes | 12 |
| fps | 19 |
| time_elapsed | 1542 |
| total timesteps | 30192 |
| train/ | |
| actor_loss | -1.88e+08 |
| critic_loss | 2.72e+13 |
| learning_rate | 0.001 |
| n_updates | 27676 |
----------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
----------------------------------
| time/ | |
| episodes | 16 |
| fps | 18 |
| time_elapsed | 2133 |
| total timesteps | 40256 |
| train/ | |
| actor_loss | -2.15e+08 |
| critic_loss | 3.45e+13 |
| learning_rate | 0.001 |
| n_updates | 37740 |
----------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
=================================
begin_total_asset:1000000
end_total_asset:4450723.86820311
Sharpe: 1.008267759668747
=================================
---------------------------------
| time/ | |
| episodes | 20 |
| fps | 17 |
| time_elapsed | 2874 |
| total timesteps | 50320 |
| train/ | |
| actor_loss | -2.3e+08 |
| critic_loss | 4.05e+13 |
| learning_rate | 0.001 |
| n_updates | 47804 |
---------------------------------
###Markdown
Model 4: **SAC**
###Code
agent = DRLAgent(env = env_train)
SAC_PARAMS = {
"batch_size": 128,
"buffer_size": 100000,
"learning_rate": 0.0003,
"learning_starts": 100,
"ent_coef": "auto_0.1",
}
model_sac = agent.get_model("sac",model_kwargs = SAC_PARAMS)
trained_sac = agent.train_model(model=model_sac,
tb_log_name='sac',
total_timesteps=50000)
###Output
Logging to tensorboard_log/sac/sac_1
=================================
begin_total_asset:1000000
end_total_asset:4449463.498168942
Sharpe: 1.01245667390232
=================================
=================================
begin_total_asset:1000000
end_total_asset:4418643.239765096
Sharpe: 1.0135796594260282
=================================
=================================
begin_total_asset:1000000
end_total_asset:4418644.1960784905
Sharpe: 1.0135797537524718
=================================
=================================
begin_total_asset:1000000
end_total_asset:4418659.429680678
Sharpe: 1.013581852537709
=================================
----------------------------------
| time/ | |
| episodes | 4 |
| fps | 12 |
| time_elapsed | 783 |
| total timesteps | 10064 |
| train/ | |
| actor_loss | -8.83e+07 |
| critic_loss | 6.57e+12 |
| ent_coef | 2.24 |
| ent_coef_loss | -205 |
| learning_rate | 0.0003 |
| n_updates | 9963 |
----------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4418651.576406099
Sharpe: 1.013581224026754
=================================
=================================
begin_total_asset:1000000
end_total_asset:4418670.948269031
Sharpe: 1.0135838030234754
=================================
=================================
begin_total_asset:1000000
end_total_asset:4418682.278829884
Sharpe: 1.013585596968056
=================================
=================================
begin_total_asset:1000000
end_total_asset:4418791.911955293
Sharpe: 1.0136007328171013
=================================
----------------------------------
| time/ | |
| episodes | 8 |
| fps | 12 |
| time_elapsed | 1585 |
| total timesteps | 20128 |
| train/ | |
| actor_loss | -1.51e+08 |
| critic_loss | 1.12e+13 |
| ent_coef | 41.7 |
| ent_coef_loss | -670 |
| learning_rate | 0.0003 |
| n_updates | 20027 |
----------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4418737.365107464
Sharpe: 1.0135970410224868
=================================
=================================
begin_total_asset:1000000
end_total_asset:4418754.895735274
Sharpe: 1.0135965589029627
=================================
=================================
begin_total_asset:1000000
end_total_asset:4419325.814567342
Sharpe: 1.0136807224228588
=================================
=================================
begin_total_asset:1000000
end_total_asset:4418142.473513333
Sharpe: 1.0135234795926031
=================================
----------------------------------
| time/ | |
| episodes | 12 |
| fps | 12 |
| time_elapsed | 2400 |
| total timesteps | 30192 |
| train/ | |
| actor_loss | -1.85e+08 |
| critic_loss | 1.87e+13 |
| ent_coef | 725 |
| ent_coef_loss | -673 |
| learning_rate | 0.0003 |
| n_updates | 30091 |
----------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4422046.188863339
Sharpe: 1.0140936726052256
=================================
=================================
begin_total_asset:1000000
end_total_asset:4424919.463828854
Sharpe: 1.014521127041106
=================================
=================================
begin_total_asset:1000000
end_total_asset:4427483.152494239
Sharpe: 1.0148626804754584
=================================
=================================
begin_total_asset:1000000
end_total_asset:4460697.650185859
Sharpe: 1.019852362102548
=================================
----------------------------------
| time/ | |
| episodes | 16 |
| fps | 12 |
| time_elapsed | 3210 |
| total timesteps | 40256 |
| train/ | |
| actor_loss | -1.93e+08 |
| critic_loss | 1.62e+13 |
| ent_coef | 1.01e+04 |
| ent_coef_loss | -238 |
| learning_rate | 0.0003 |
| n_updates | 40155 |
----------------------------------
=================================
begin_total_asset:1000000
end_total_asset:4434035.982803257
Sharpe: 1.0161512551319891
=================================
=================================
begin_total_asset:1000000
end_total_asset:4454728.906041551
Sharpe: 1.018484863448905
=================================
=================================
begin_total_asset:1000000
end_total_asset:4475667.120269234
Sharpe: 1.0215545521682856
=================================
###Markdown
Trading
Assume that we have $1,000,000 of initial capital at 2019-01-01. We use the trained A2C model to trade the Dow Jones 30 constituent stocks.
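DRL_prediction below returns a DataFrame of daily portfolio returns. If the implied account-value path is wanted, a minimal sketch to compound those returns is shown here; it assumes a 'daily_return' column, which is the column name the environment writes:

```python
import pandas as pd

def account_value_from_returns(df_daily_return: pd.DataFrame,
                               initial_capital: float = 1_000_000.0) -> pd.Series:
    """Compound a 'daily_return' column into a portfolio-value series."""
    return initial_capital * (1.0 + df_daily_return["daily_return"]).cumprod()
```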
###Code
trade = data_split(df,'2019-01-01', '2021-01-01')
e_trade_gym = StockPortfolioEnv(df = trade, **env_kwargs)
trade.shape
df_daily_return, df_actions = DRLAgent.DRL_prediction(model=trained_a2c,
environment = e_trade_gym)
df_daily_return.head()
df_actions.head()
df_actions.to_csv('df_actions.csv')
###Output
_____no_output_____
###Markdown
Part 7: Backtest Our Strategy
Backtesting plays a key role in evaluating the performance of a trading strategy. An automated backtesting tool is preferred because it reduces human error. We usually use the Quantopian pyfolio package to backtest our trading strategies. It is easy to use and consists of various individual plots that provide a comprehensive image of the performance of a trading strategy.
7.1 BackTestStats
Pass in the daily returns (df_daily_return, computed in the trading step above); this information is stored in the env class.
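Besides the full pyfolio table, the annualized Sharpe ratio that the training environment prints can be recomputed directly from the daily returns as a sanity check. A minimal sketch, assuming an array-like or Series of daily returns such as df_daily_return['daily_return']:

```python
import numpy as np
import pandas as pd

def annualized_sharpe(daily_returns, trading_days: int = 252) -> float:
    """Annualized Sharpe ratio: sqrt(252) * mean / std of daily returns.

    Uses pandas' sample std (ddof=1), matching the Sharpe values printed
    by the training environment above.
    """
    r = pd.Series(daily_returns, dtype=float)
    return float(np.sqrt(trading_days) * r.mean() / r.std())
```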
###Code
from pyfolio import timeseries
DRL_strat = convert_daily_return_to_pyfolio_ts(df_daily_return)
perf_func = timeseries.perf_stats
perf_stats_all = perf_func( returns=DRL_strat,
factor_returns=DRL_strat,
positions=None, transactions=None, turnover_denom="AGB")
print("==============DRL Strategy Stats===========")
perf_stats_all
###Output
==============DRL Strategy Stats===========
###Markdown
7.2 BackTestPlot
###Code
import pyfolio
%matplotlib inline
baseline_df = get_baseline(
ticker='^DJI', start='2019-01-01', end='2021-01-01'
)
baseline_returns = get_daily_return(baseline_df, value_col_name="close")
with pyfolio.plotting.plotting_context(font_scale=1.1):
pyfolio.create_full_tear_sheet(returns = DRL_strat,
benchmark_rets=baseline_returns, set_context=False)
###Output
_____no_output_____
###Markdown
Deep Reinforcement Learning for Stock Trading from Scratch: Portfolio Allocation. Tutorial to use OpenAI DRL to perform portfolio allocation in one Jupyter Notebook | Presented at NeurIPS 2020: Deep RL Workshop* This blog is based on our paper: FinRL: A Deep Reinforcement Learning Library for Automated Stock Trading in Quantitative Finance, presented at NeurIPS 2020: Deep RL Workshop.* Check out the Medium blog for detailed explanations: * Please report any issues to our GitHub: https://github.com/AI4Finance-LLC/FinRL-Library/issues* **Pytorch Version** Content * [1. Problem Definition](0)* [2. Getting Started - Load Python Packages](1) * [2.1. Install Packages](1.1) * [2.2. Check Additional Packages](1.2) * [2.3. Import Packages](1.3) * [2.4. Create Folders](1.4)* [3. Download Data](2)* [4. Preprocess Data](3) * [4.1. Technical Indicators](3.1) * [4.2. Perform Feature Engineering](3.2)* [5. Build Environment](4) * [5.1. Training & Trade Data Split](4.1) * [5.2. User-defined Environment](4.2) * [5.3. Initialize Environment](4.3) * [6. Implement DRL Algorithms](5) * [7. Backtesting Performance](6) * [7.1. BackTestStats](6.1) * [7.2. BackTestPlot](6.2) * [7.3. Baseline Stats](6.3) * [7.4. Compare to Stock Market Index](6.4) Part 1. Problem Definition This problem is to design an automated trading solution for portfolio allocation. We model the trading process as a Markov Decision Process (MDP), and we then formulate our trading goal as a maximization problem. The algorithm is trained using Deep Reinforcement Learning (DRL) algorithms, and the components of the reinforcement learning environment are:* Action: The action space describes the allowed actions through which the agent interacts with the environment. Normally, a ∈ A includes three actions: a ∈ {−1, 0, 1}, where −1, 0, 1 represent selling, holding, and buying one stock. Also, an action can be carried out upon multiple shares. We use an action space {−k, ..., −1, 0, 1, ..., k}, where k denotes the number of shares. For example, "Buy 10 shares of AAPL" or "Sell 10 shares of AAPL" are 10 or −10, respectively.* Reward function: r(s, a, s′) is the incentive mechanism for an agent to learn a better action. The reward is the change of the portfolio value when action a is taken at state s and the agent arrives at the new state s′, i.e., r(s, a, s′) = v′ − v, where v′ and v represent the portfolio values at states s′ and s, respectively.* State: The state space describes the observations that the agent receives from the environment. Just as a human trader needs to analyze various information before executing a trade, so our trading agent observes many different features to better learn in an interactive environment.* Environment: Dow 30 constituents. The data that we will be using for this case study is obtained from the Yahoo Finance API. The data contains Open-High-Low-Close prices and volume. Part 2. Getting Started - Load Python Packages 2.1. Install all the packages through the FinRL library
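To make the definitions above concrete, here is a toy sketch of the reward and the discrete share-trading action space; the function and variable names are illustrative, not part of the FinRL API:

```python
# Illustrative sketch of the MDP components defined above (not FinRL code).

def step_reward(v_before: float, v_after: float) -> float:
    """Reward r(s, a, s') = v' - v: the one-step change in portfolio value."""
    return v_after - v_before

# With k = 10, actions live in {-10, ..., -1, 0, 1, ..., 10}:
k = 10
action_space = list(range(-k, k + 1))  # -10 = sell 10 shares, +10 = buy 10 shares

print(step_reward(1_000_000.0, 1_004_500.0))  # 4500.0
```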
###Code
## install finrl library
!pip install git+https://github.com/AI4Finance-LLC/FinRL-Library.git
###Output
Collecting git+https://github.com/AI4Finance-LLC/FinRL-Library.git
Cloning https://github.com/AI4Finance-LLC/FinRL-Library.git to /tmp/pip-req-build-q5i8wlg8
Running command git clone -q https://github.com/AI4Finance-LLC/FinRL-Library.git /tmp/pip-req-build-q5i8wlg8
Collecting pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2
Cloning https://github.com/quantopian/pyfolio.git to /tmp/pip-install-iklozlwf/pyfolio_8412840d4dbc46dbba3ea56f3f97f75c
Running command git clone -q https://github.com/quantopian/pyfolio.git /tmp/pip-install-iklozlwf/pyfolio_8412840d4dbc46dbba3ea56f3f97f75c
Requirement already satisfied: numpy>=1.17.3 in /usr/local/lib/python3.7/dist-packages (from finrl==0.3.1) (1.19.5)
Requirement already satisfied: pandas>=1.1.5 in /usr/local/lib/python3.7/dist-packages (from finrl==0.3.1) (1.1.5)
Collecting stockstats
Downloading stockstats-0.3.2-py2.py3-none-any.whl (13 kB)
Collecting yfinance
Downloading yfinance-0.1.63.tar.gz (26 kB)
Requirement already satisfied: matplotlib in /usr/local/lib/python3.7/dist-packages (from finrl==0.3.1) (3.2.2)
Requirement already satisfied: scikit-learn>=0.21.0 in /usr/local/lib/python3.7/dist-packages (from finrl==0.3.1) (0.22.2.post1)
Requirement already satisfied: gym>=0.17 in /usr/local/lib/python3.7/dist-packages (from finrl==0.3.1) (0.17.3)
Collecting stable-baselines3[extra]
Downloading stable_baselines3-1.1.0-py3-none-any.whl (172 kB)
     |████████████████████████████████| 172 kB 7.0 MB/s
Collecting ray[default]
Downloading ray-1.6.0-cp37-cp37m-manylinux2014_x86_64.whl (49.6 MB)
     |████████████████████████████████| 49.6 MB 6.2 kB/s
Collecting lz4
Downloading lz4-3.1.3-cp37-cp37m-manylinux2010_x86_64.whl (1.8 MB)
     |████████████████████████████████| 1.8 MB 14.2 MB/s
Collecting tensorboardX
Downloading tensorboardX-2.4-py2.py3-none-any.whl (124 kB)
     |████████████████████████████████| 124 kB 57.6 MB/s
Collecting gputil
Downloading GPUtil-1.4.0.tar.gz (5.5 kB)
Collecting trading_calendars
Downloading trading_calendars-2.1.1.tar.gz (108 kB)
     |████████████████████████████████| 108 kB 54.7 MB/s
Collecting alpaca_trade_api
Downloading alpaca_trade_api-1.2.3-py3-none-any.whl (40 kB)
     |████████████████████████████████| 40 kB 4.2 MB/s
Collecting ccxt
Downloading ccxt-1.55.84-py2.py3-none-any.whl (2.0 MB)
     |████████████████████████████████| 2.0 MB 33.4 MB/s
Collecting jqdatasdk
Downloading jqdatasdk-1.8.10-py3-none-any.whl (153 kB)
     |████████████████████████████████| 153 kB 59.2 MB/s
Collecting wrds
Downloading wrds-3.1.0-py3-none-any.whl (12 kB)
Requirement already satisfied: pytest in /usr/local/lib/python3.7/dist-packages (from finrl==0.3.1) (3.6.4)
Requirement already satisfied: setuptools>=41.4.0 in /usr/local/lib/python3.7/dist-packages (from finrl==0.3.1) (57.4.0)
Requirement already satisfied: wheel>=0.33.6 in /usr/local/lib/python3.7/dist-packages (from finrl==0.3.1) (0.37.0)
Requirement already satisfied: ipython>=3.2.3 in /usr/local/lib/python3.7/dist-packages (from pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.1) (5.5.0)
Requirement already satisfied: pytz>=2014.10 in /usr/local/lib/python3.7/dist-packages (from pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.1) (2018.9)
Requirement already satisfied: scipy>=0.14.0 in /usr/local/lib/python3.7/dist-packages (from pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.1) (1.4.1)
Requirement already satisfied: seaborn>=0.7.1 in /usr/local/lib/python3.7/dist-packages (from pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.1) (0.11.1)
Collecting empyrical>=0.5.0
Downloading empyrical-0.5.5.tar.gz (52 kB)
     |████████████████████████████████| 52 kB 997 kB/s
Requirement already satisfied: pandas-datareader>=0.2 in /usr/local/lib/python3.7/dist-packages (from empyrical>=0.5.0->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.1) (0.9.0)
Requirement already satisfied: pyglet<=1.5.0,>=1.4.0 in /usr/local/lib/python3.7/dist-packages (from gym>=0.17->finrl==0.3.1) (1.5.0)
Requirement already satisfied: cloudpickle<1.7.0,>=1.2.0 in /usr/local/lib/python3.7/dist-packages (from gym>=0.17->finrl==0.3.1) (1.3.0)
Requirement already satisfied: pexpect in /usr/local/lib/python3.7/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.1) (4.8.0)
Requirement already satisfied: decorator in /usr/local/lib/python3.7/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.1) (4.4.2)
Requirement already satisfied: simplegeneric>0.8 in /usr/local/lib/python3.7/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.1) (0.8.1)
Requirement already satisfied: prompt-toolkit<2.0.0,>=1.0.4 in /usr/local/lib/python3.7/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.1) (1.0.18)
Requirement already satisfied: traitlets>=4.2 in /usr/local/lib/python3.7/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.1) (5.0.5)
Requirement already satisfied: pickleshare in /usr/local/lib/python3.7/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.1) (0.7.5)
Requirement already satisfied: pygments in /usr/local/lib/python3.7/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.1) (2.6.1)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->finrl==0.3.1) (1.3.1)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.7/dist-packages (from matplotlib->finrl==0.3.1) (0.10.0)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->finrl==0.3.1) (2.4.7)
Requirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->finrl==0.3.1) (2.8.2)
Requirement already satisfied: six in /usr/local/lib/python3.7/dist-packages (from cycler>=0.10->matplotlib->finrl==0.3.1) (1.15.0)
Requirement already satisfied: requests>=2.19.0 in /usr/local/lib/python3.7/dist-packages (from pandas-datareader>=0.2->empyrical>=0.5.0->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.1) (2.23.0)
Requirement already satisfied: lxml in /usr/local/lib/python3.7/dist-packages (from pandas-datareader>=0.2->empyrical>=0.5.0->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.1) (4.2.6)
Requirement already satisfied: wcwidth in /usr/local/lib/python3.7/dist-packages (from prompt-toolkit<2.0.0,>=1.0.4->ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.1) (0.2.5)
Requirement already satisfied: future in /usr/local/lib/python3.7/dist-packages (from pyglet<=1.5.0,>=1.4.0->gym>=0.17->finrl==0.3.1) (0.16.0)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests>=2.19.0->pandas-datareader>=0.2->empyrical>=0.5.0->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.1) (2021.5.30)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests>=2.19.0->pandas-datareader>=0.2->empyrical>=0.5.0->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.1) (3.0.4)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests>=2.19.0->pandas-datareader>=0.2->empyrical>=0.5.0->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.1) (2.10)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests>=2.19.0->pandas-datareader>=0.2->empyrical>=0.5.0->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.1) (1.24.3)
Requirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.7/dist-packages (from scikit-learn>=0.21.0->finrl==0.3.1) (1.0.1)
Requirement already satisfied: ipython-genutils in /usr/local/lib/python3.7/dist-packages (from traitlets>=4.2->ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.1) (0.2.0)
Collecting websockets<10,>=8.0
Downloading websockets-9.1-cp37-cp37m-manylinux2010_x86_64.whl (103 kB)
     |████████████████████████████████| 103 kB 27.2 MB/s
Requirement already satisfied: msgpack==1.0.2 in /usr/local/lib/python3.7/dist-packages (from alpaca_trade_api->finrl==0.3.1) (1.0.2)
Collecting websocket-client<2,>=0.56.0
Downloading websocket_client-1.2.1-py2.py3-none-any.whl (52 kB)
     |████████████████████████████████| 52 kB 1.1 MB/s
Collecting aiodns>=1.1.1
Downloading aiodns-3.0.0-py3-none-any.whl (5.0 kB)
Collecting yarl==1.6.3
Downloading yarl-1.6.3-cp37-cp37m-manylinux2014_x86_64.whl (294 kB)
     |████████████████████████████████| 294 kB 74.0 MB/s
Collecting aiohttp<3.8,>=3.7.4
Downloading aiohttp-3.7.4.post0-cp37-cp37m-manylinux2014_x86_64.whl (1.3 MB)
     |████████████████████████████████| 1.3 MB 48.1 MB/s
Collecting cryptography>=2.6.1
Downloading cryptography-3.4.8-cp36-abi3-manylinux_2_24_x86_64.whl (3.0 MB)
     |████████████████████████████████| 3.0 MB 41.2 MB/s
Requirement already satisfied: typing-extensions>=3.7.4 in /usr/local/lib/python3.7/dist-packages (from yarl==1.6.3->ccxt->finrl==0.3.1) (3.7.4.3)
Collecting multidict>=4.0
Downloading multidict-5.1.0-cp37-cp37m-manylinux2014_x86_64.whl (142 kB)
     |████████████████████████████████| 142 kB 60.3 MB/s
Collecting pycares>=4.0.0
Downloading pycares-4.0.0-cp37-cp37m-manylinux2010_x86_64.whl (291 kB)
     |████████████████████████████████| 291 kB 59.1 MB/s
Requirement already satisfied: attrs>=17.3.0 in /usr/local/lib/python3.7/dist-packages (from aiohttp<3.8,>=3.7.4->ccxt->finrl==0.3.1) (21.2.0)
Collecting async-timeout<4.0,>=3.0
Downloading async_timeout-3.0.1-py3-none-any.whl (8.2 kB)
Requirement already satisfied: cffi>=1.12 in /usr/local/lib/python3.7/dist-packages (from cryptography>=2.6.1->ccxt->finrl==0.3.1) (1.14.6)
Requirement already satisfied: pycparser in /usr/local/lib/python3.7/dist-packages (from cffi>=1.12->cryptography>=2.6.1->ccxt->finrl==0.3.1) (2.20)
Requirement already satisfied: SQLAlchemy>=1.2.8 in /usr/local/lib/python3.7/dist-packages (from jqdatasdk->finrl==0.3.1) (1.4.22)
Collecting thriftpy2>=0.3.9
Downloading thriftpy2-0.4.14.tar.gz (361 kB)
     |████████████████████████████████| 361 kB 55.6 MB/s
Collecting pymysql>=0.7.6
Downloading PyMySQL-1.0.2-py3-none-any.whl (43 kB)
     |████████████████████████████████| 43 kB 1.9 MB/s
Requirement already satisfied: importlib-metadata in /usr/local/lib/python3.7/dist-packages (from SQLAlchemy>=1.2.8->jqdatasdk->finrl==0.3.1) (4.6.4)
Requirement already satisfied: greenlet!=0.4.17 in /usr/local/lib/python3.7/dist-packages (from SQLAlchemy>=1.2.8->jqdatasdk->finrl==0.3.1) (1.1.1)
Collecting ply<4.0,>=3.4
Downloading ply-3.11-py2.py3-none-any.whl (49 kB)
     |████████████████████████████████| 49 kB 4.5 MB/s
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.7/dist-packages (from importlib-metadata->SQLAlchemy>=1.2.8->jqdatasdk->finrl==0.3.1) (3.5.0)
Requirement already satisfied: ptyprocess>=0.5 in /usr/local/lib/python3.7/dist-packages (from pexpect->ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.3.1) (0.7.0)
Requirement already satisfied: atomicwrites>=1.0 in /usr/local/lib/python3.7/dist-packages (from pytest->finrl==0.3.1) (1.4.0)
Requirement already satisfied: py>=1.5.0 in /usr/local/lib/python3.7/dist-packages (from pytest->finrl==0.3.1) (1.10.0)
Requirement already satisfied: pluggy<0.8,>=0.5 in /usr/local/lib/python3.7/dist-packages (from pytest->finrl==0.3.1) (0.7.1)
Requirement already satisfied: more-itertools>=4.0.0 in /usr/local/lib/python3.7/dist-packages (from pytest->finrl==0.3.1) (8.8.0)
Requirement already satisfied: filelock in /usr/local/lib/python3.7/dist-packages (from ray[default]->finrl==0.3.1) (3.0.12)
Collecting redis>=3.5.0
Downloading redis-3.5.3-py2.py3-none-any.whl (72 kB)
     |████████████████████████████████| 72 kB 470 kB/s
Requirement already satisfied: click>=7.0 in /usr/local/lib/python3.7/dist-packages (from ray[default]->finrl==0.3.1) (7.1.2)
Requirement already satisfied: pyyaml in /usr/local/lib/python3.7/dist-packages (from ray[default]->finrl==0.3.1) (3.13)
Requirement already satisfied: protobuf>=3.15.3 in /usr/local/lib/python3.7/dist-packages (from ray[default]->finrl==0.3.1) (3.17.3)
Requirement already satisfied: grpcio>=1.28.1 in /usr/local/lib/python3.7/dist-packages (from ray[default]->finrl==0.3.1) (1.39.0)
Collecting aiohttp-cors
Downloading aiohttp_cors-0.7.0-py3-none-any.whl (27 kB)
Requirement already satisfied: jsonschema in /usr/local/lib/python3.7/dist-packages (from ray[default]->finrl==0.3.1) (2.6.0)
Collecting colorful
Downloading colorful-0.5.4-py2.py3-none-any.whl (201 kB)
     |████████████████████████████████| 201 kB 50.0 MB/s
Collecting aioredis<2
Downloading aioredis-1.3.1-py3-none-any.whl (65 kB)
     |████████████████████████████████| 65 kB 3.5 MB/s
Collecting py-spy>=0.2.0
Downloading py_spy-0.3.8-py2.py3-none-manylinux_2_5_x86_64.manylinux1_x86_64.whl (3.1 MB)
     |████████████████████████████████| 3.1 MB 29.1 MB/s
Collecting gpustat
Downloading gpustat-0.6.0.tar.gz (78 kB)
     |████████████████████████████████| 78 kB 5.7 MB/s
Collecting opencensus
Downloading opencensus-0.7.13-py2.py3-none-any.whl (127 kB)
     |████████████████████████████████| 127 kB 44.2 MB/s
Requirement already satisfied: prometheus-client>=0.7.1 in /usr/local/lib/python3.7/dist-packages (from ray[default]->finrl==0.3.1) (0.11.0)
Collecting hiredis
Downloading hiredis-2.0.0-cp37-cp37m-manylinux2010_x86_64.whl (85 kB)
     |████████████████████████████████| 85 kB 3.2 MB/s
Requirement already satisfied: nvidia-ml-py3>=7.352.0 in /usr/local/lib/python3.7/dist-packages (from gpustat->ray[default]->finrl==0.3.1) (7.352.0)
Requirement already satisfied: psutil in /usr/local/lib/python3.7/dist-packages (from gpustat->ray[default]->finrl==0.3.1) (5.4.8)
Collecting blessings>=1.6
Downloading blessings-1.7-py3-none-any.whl (18 kB)
Collecting opencensus-context==0.1.2
Downloading opencensus_context-0.1.2-py2.py3-none-any.whl (4.4 kB)
Requirement already satisfied: google-api-core<2.0.0,>=1.0.0 in /usr/local/lib/python3.7/dist-packages (from opencensus->ray[default]->finrl==0.3.1) (1.26.3)
Requirement already satisfied: google-auth<2.0dev,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from google-api-core<2.0.0,>=1.0.0->opencensus->ray[default]->finrl==0.3.1) (1.34.0)
Requirement already satisfied: packaging>=14.3 in /usr/local/lib/python3.7/dist-packages (from google-api-core<2.0.0,>=1.0.0->opencensus->ray[default]->finrl==0.3.1) (21.0)
Requirement already satisfied: googleapis-common-protos<2.0dev,>=1.6.0 in /usr/local/lib/python3.7/dist-packages (from google-api-core<2.0.0,>=1.0.0->opencensus->ray[default]->finrl==0.3.1) (1.53.0)
Requirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.7/dist-packages (from google-auth<2.0dev,>=1.21.1->google-api-core<2.0.0,>=1.0.0->opencensus->ray[default]->finrl==0.3.1) (0.2.8)
Requirement already satisfied: cachetools<5.0,>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from google-auth<2.0dev,>=1.21.1->google-api-core<2.0.0,>=1.0.0->opencensus->ray[default]->finrl==0.3.1) (4.2.2)
Requirement already satisfied: rsa<5,>=3.1.4 in /usr/local/lib/python3.7/dist-packages (from google-auth<2.0dev,>=1.21.1->google-api-core<2.0.0,>=1.0.0->opencensus->ray[default]->finrl==0.3.1) (4.7.2)
Requirement already satisfied: pyasn1<0.5.0,>=0.4.6 in /usr/local/lib/python3.7/dist-packages (from pyasn1-modules>=0.2.1->google-auth<2.0dev,>=1.21.1->google-api-core<2.0.0,>=1.0.0->opencensus->ray[default]->finrl==0.3.1) (0.4.8)
Requirement already satisfied: tabulate in /usr/local/lib/python3.7/dist-packages (from ray[default]->finrl==0.3.1) (0.8.9)
Requirement already satisfied: torch>=1.4.0 in /usr/local/lib/python3.7/dist-packages (from stable-baselines3[extra]->finrl==0.3.1) (1.9.0+cu102)
Requirement already satisfied: opencv-python in /usr/local/lib/python3.7/dist-packages (from stable-baselines3[extra]->finrl==0.3.1) (4.1.2.30)
Requirement already satisfied: atari-py~=0.2.0 in /usr/local/lib/python3.7/dist-packages (from stable-baselines3[extra]->finrl==0.3.1) (0.2.9)
Requirement already satisfied: pillow in /usr/local/lib/python3.7/dist-packages (from stable-baselines3[extra]->finrl==0.3.1) (7.1.2)
Requirement already satisfied: tensorboard>=2.2.0 in /usr/local/lib/python3.7/dist-packages (from stable-baselines3[extra]->finrl==0.3.1) (2.6.0)
Requirement already satisfied: tensorboard-data-server<0.7.0,>=0.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard>=2.2.0->stable-baselines3[extra]->finrl==0.3.1) (0.6.1)
Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in /usr/local/lib/python3.7/dist-packages (from tensorboard>=2.2.0->stable-baselines3[extra]->finrl==0.3.1) (0.4.5)
Requirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.7/dist-packages (from tensorboard>=2.2.0->stable-baselines3[extra]->finrl==0.3.1) (1.0.1)
Requirement already satisfied: absl-py>=0.4 in /usr/local/lib/python3.7/dist-packages (from tensorboard>=2.2.0->stable-baselines3[extra]->finrl==0.3.1) (0.12.0)
Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard>=2.2.0->stable-baselines3[extra]->finrl==0.3.1) (1.8.0)
Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.7/dist-packages (from tensorboard>=2.2.0->stable-baselines3[extra]->finrl==0.3.1) (3.3.4)
Requirement already satisfied: requests-oauthlib>=0.7.0 in /usr/local/lib/python3.7/dist-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard>=2.2.0->stable-baselines3[extra]->finrl==0.3.1) (1.3.0)
Requirement already satisfied: oauthlib>=3.0.0 in /usr/local/lib/python3.7/dist-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard>=2.2.0->stable-baselines3[extra]->finrl==0.3.1) (3.1.1)
Collecting int-date>=0.1.7
Downloading int_date-0.1.8-py2.py3-none-any.whl (5.0 kB)
Requirement already satisfied: toolz in /usr/local/lib/python3.7/dist-packages (from trading_calendars->finrl==0.3.1) (0.11.1)
Collecting mock
Downloading mock-4.0.3-py3-none-any.whl (28 kB)
Collecting psycopg2-binary
Downloading psycopg2_binary-2.9.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (3.4 MB)
     |████████████████████████████████| 3.4 MB 15.0 MB/s
Requirement already satisfied: multitasking>=0.0.7 in /usr/local/lib/python3.7/dist-packages (from yfinance->finrl==0.3.1) (0.0.9)
Collecting lxml
Downloading lxml-4.6.3-cp37-cp37m-manylinux2014_x86_64.whl (6.3 MB)
     |████████████████████████████████| 6.3 MB 24.1 MB/s
Building wheels for collected packages: finrl, pyfolio, empyrical, gputil, thriftpy2, gpustat, trading-calendars, yfinance
  Building wheel for finrl (setup.py) ... done
Created wheel for finrl: filename=finrl-0.3.1-py3-none-any.whl size=2732514 sha256=d5c403cd1a2d73433fa18f96b6f1438cadf253439f91c9f5b5617c78ce1c2a3b
Stored in directory: /tmp/pip-ephem-wheel-cache-r3pz55z1/wheels/17/ff/bd/1bc602a0352762b0b24041b88536d803ae343ed0a711fcf55e
  Building wheel for pyfolio (setup.py) ... done
Created wheel for pyfolio: filename=pyfolio-0.9.2+75.g4b901f6-py3-none-any.whl size=75775 sha256=4f0a0f7e2e86f37fe4225dd4bedf4c51ca8e960932a1ac3beaedd71a471dbd31
Stored in directory: /tmp/pip-ephem-wheel-cache-r3pz55z1/wheels/ef/09/e5/2c1bf37c050d22557c080deb1be986d06424627c04aeca19b9
  Building wheel for empyrical (setup.py) ... done
Created wheel for empyrical: filename=empyrical-0.5.5-py3-none-any.whl size=39777 sha256=85f07abd5a7a81461847f4ca81c5b8883a0cdbc26a6d84812560dea2b2d19f9c
Stored in directory: /root/.cache/pip/wheels/d9/91/4b/654fcff57477efcf149eaca236da2fce991526cbab431bf312
  Building wheel for gputil (setup.py) ... done
Created wheel for gputil: filename=GPUtil-1.4.0-py3-none-any.whl size=7411 sha256=b7a395c7857c2b0ac946ef42f1ab8d1c9f75b4768ceb134a335e72a553f5d13d
Stored in directory: /root/.cache/pip/wheels/6e/f8/83/534c52482d6da64622ddbf72cd93c35d2ef2881b78fd08ff0c
  Building wheel for thriftpy2 (setup.py) ... done
Created wheel for thriftpy2: filename=thriftpy2-0.4.14-cp37-cp37m-linux_x86_64.whl size=940419 sha256=4d4537cb53aafbb7700e1b19eaa60160d03da34b1f247d788ef1b0eec1778684
Stored in directory: /root/.cache/pip/wheels/2a/f5/49/9c0d851aa64b58db72883cf9393cc824d536bdf13f5c83cff4
  Building wheel for gpustat (setup.py) ... done
Created wheel for gpustat: filename=gpustat-0.6.0-py3-none-any.whl size=12617 sha256=16454a0926f4a3c029336ea46335507b8375d77be3cc9247a2d249b76cc2440b
Stored in directory: /root/.cache/pip/wheels/e6/67/af/f1ad15974b8fd95f59a63dbf854483ebe5c7a46a93930798b8
  Building wheel for trading-calendars (setup.py) ... done
Created wheel for trading-calendars: filename=trading_calendars-2.1.1-py3-none-any.whl size=140937 sha256=c57b62f8097002d6c11b6e7b62a335da6967bafcc73e329b2b3369fae1bf01fa
Stored in directory: /root/.cache/pip/wheels/62/9c/d1/46a21e1b99e064cba79b85e9f95e6a208ac5ba4c29ae5962ec
  Building wheel for yfinance (setup.py) ... done
Created wheel for yfinance: filename=yfinance-0.1.63-py2.py3-none-any.whl size=23918 sha256=715114c7d53ca17cac554d1865cae128d831dcf4adb03a8bb4e875aa6297a36d
Stored in directory: /root/.cache/pip/wheels/fe/87/8b/7ec24486e001d3926537f5f7801f57a74d181be25b11157983
Successfully built finrl pyfolio empyrical gputil thriftpy2 gpustat trading-calendars yfinance
Installing collected packages: multidict, yarl, lxml, async-timeout, redis, pycares, ply, opencensus-context, hiredis, blessings, aiohttp, websockets, websocket-client, thriftpy2, tensorboardX, stable-baselines3, ray, pymysql, py-spy, psycopg2-binary, opencensus, mock, int-date, gpustat, empyrical, cryptography, colorful, aioredis, aiohttp-cors, aiodns, yfinance, wrds, trading-calendars, stockstats, pyfolio, lz4, jqdatasdk, gputil, ccxt, alpaca-trade-api, finrl
Attempting uninstall: lxml
Found existing installation: lxml 4.2.6
Uninstalling lxml-4.2.6:
Successfully uninstalled lxml-4.2.6
Successfully installed aiodns-3.0.0 aiohttp-3.7.4.post0 aiohttp-cors-0.7.0 aioredis-1.3.1 alpaca-trade-api-1.2.3 async-timeout-3.0.1 blessings-1.7 ccxt-1.55.84 colorful-0.5.4 cryptography-3.4.8 empyrical-0.5.5 finrl-0.3.1 gpustat-0.6.0 gputil-1.4.0 hiredis-2.0.0 int-date-0.1.8 jqdatasdk-1.8.10 lxml-4.6.3 lz4-3.1.3 mock-4.0.3 multidict-5.1.0 opencensus-0.7.13 opencensus-context-0.1.2 ply-3.11 psycopg2-binary-2.9.1 py-spy-0.3.8 pycares-4.0.0 pyfolio-0.9.2+75.g4b901f6 pymysql-1.0.2 ray-1.6.0 redis-3.5.3 stable-baselines3-1.1.0 stockstats-0.3.2 tensorboardX-2.4 thriftpy2-0.4.14 trading-calendars-2.1.1 websocket-client-1.2.1 websockets-9.1 wrds-3.1.0 yarl-1.6.3 yfinance-0.1.63
###Markdown
2.2. Check if the additional packages needed are present; if not, install them. * Yahoo Finance API * pandas * numpy * matplotlib * stockstats * OpenAI gym * stable-baselines * tensorflow * pyfolio 2.3. Import Packages
###Code
import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
matplotlib.use('Agg')
%matplotlib inline
import datetime
from finrl.apps import config
from finrl.neo_finrl.preprocessor.yahoodownloader import YahooDownloader
from finrl.neo_finrl.preprocessor.preprocessors import FeatureEngineer, data_split
from finrl.neo_finrl.env_portfolio_allocation.env_portfolio import StockPortfolioEnv
from finrl.drl_agents.stablebaselines3.models import DRLAgent
from finrl.plot import backtest_stats, backtest_plot, get_daily_return, get_baseline,convert_daily_return_to_pyfolio_ts
import sys
sys.path.append("../FinRL-Library")
###Output
/usr/local/lib/python3.7/dist-packages/pyfolio/pos.py:27: UserWarning: Module "zipline.assets" not found; multipliers will not be applied to position notionals.
'Module "zipline.assets" not found; multipliers will not be applied'
###Markdown
2.4. Create Folders
###Code
import os
if not os.path.exists("./" + config.DATA_SAVE_DIR):
os.makedirs("./" + config.DATA_SAVE_DIR)
if not os.path.exists("./" + config.TRAINED_MODEL_DIR):
os.makedirs("./" + config.TRAINED_MODEL_DIR)
if not os.path.exists("./" + config.TENSORBOARD_LOG_DIR):
os.makedirs("./" + config.TENSORBOARD_LOG_DIR)
if not os.path.exists("./" + config.RESULTS_DIR):
os.makedirs("./" + config.RESULTS_DIR)
###Output
_____no_output_____
###Markdown
Part 3. Download Data. Yahoo Finance is a website that provides stock data, financial news, financial reports, etc. All the data provided by Yahoo Finance is free.* FinRL uses the class **YahooDownloader** to fetch data from the Yahoo Finance API.* Call Limit: Using the Public API (without authentication), you are limited to 2,000 requests per hour per IP (or up to a total of 48,000 requests a day).
###Code
print(config.DOW_30_TICKER)
df = YahooDownloader(start_date = '2008-01-01',
end_date = '2021-07-01',
ticker_list = config.DOW_30_TICKER).fetch_data()
df.head()
df.shape
###Output
_____no_output_____
###Markdown
Part 4: Preprocess Data. Data preprocessing is a crucial step for training a high-quality machine learning model. We need to check for missing data and do feature engineering in order to convert the data into a model-ready state.* Add technical indicators. In practical trading, various kinds of information need to be taken into account, such as historical stock prices, current holding shares, technical indicators, etc. In this article, we demonstrate two trend-following technical indicators: MACD and RSI.* Add turbulence index. Risk aversion reflects whether an investor will choose to preserve the capital. It also influences one's trading strategy when facing different market volatility levels. To control the risk in a worst-case scenario, such as the financial crisis of 2007–2008, FinRL employs the financial turbulence index that measures extreme asset price fluctuation.
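The code below sets use_turbulence=False, so the index is not used in this notebook, but the idea behind it is simple: measure how unusual today's cross-sectional returns are relative to their recent history via a Mahalanobis distance. The sketch below is a minimal illustration of that idea, not FinRL's exact implementation; the function name, window length, and use of a pseudo-inverse are our own choices.

```python
import numpy as np
import pandas as pd

def turbulence_index(returns: pd.DataFrame, window: int = 252) -> pd.Series:
    """Mahalanobis distance of each day's returns from the trailing window.

    `returns` holds one column of daily returns per ticker, e.g.
    returns = df.pivot_table(index='date', columns='tic', values='close').pct_change().dropna()
    """
    turb = pd.Series(0.0, index=returns.index)
    for t in range(window, len(returns)):
        hist = returns.iloc[t - window:t]
        mu = hist.mean().values
        cov_inv = np.linalg.pinv(hist.cov().values)  # pseudo-inverse for numerical stability
        dev = returns.iloc[t].values - mu
        turb.iloc[t] = float(dev @ cov_inv @ dev)    # large values = extreme co-movement
    return turb
```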
###Code
fe = FeatureEngineer(
use_technical_indicator=True,
use_turbulence=False,
user_defined_feature = False)
df = fe.preprocess_data(df)
df.shape
df.head()
###Output
_____no_output_____
###Markdown
Add covariance matrix as states
###Code
# add covariance matrix as states
df=df.sort_values(['date','tic'],ignore_index=True)
df.index = df.date.factorize()[0]
cov_list = []
return_list = []
# look back is one year
lookback=252
for i in range(lookback,len(df.index.unique())):
data_lookback = df.loc[i-lookback:i,:]
price_lookback=data_lookback.pivot_table(index = 'date',columns = 'tic', values = 'close')
return_lookback = price_lookback.pct_change().dropna()
return_list.append(return_lookback)
covs = return_lookback.cov().values
cov_list.append(covs)
df_cov = pd.DataFrame({'date':df.date.unique()[lookback:],'cov_list':cov_list,'return_list':return_list})
df = df.merge(df_cov, on='date')
df = df.sort_values(['date','tic']).reset_index(drop=True)
df.shape
df.head()
###Output
_____no_output_____
###Markdown
Part 5. Design Environment. Considering the stochastic and interactive nature of automated stock trading tasks, a financial task is modeled as a **Markov Decision Process (MDP)** problem. The training process involves observing stock price changes, taking an action, and calculating the reward, so that the agent adjusts its strategy accordingly. By interacting with the environment, the trading agent derives a trading strategy that maximizes rewards as time proceeds. Our trading environments, based on the OpenAI Gym framework, simulate live stock markets with real market data according to the principle of time-driven simulation. The action space describes the allowed actions through which the agent interacts with the environment. Normally, action a includes three actions: {-1, 0, 1}, where -1, 0, 1 represent selling, holding, and buying one share. Also, an action can be carried out upon multiple shares. We use an action space {-k, ..., -1, 0, 1, ..., k}, where k denotes the number of shares to buy and -k denotes the number of shares to sell. For example, "Buy 10 shares of AAPL" or "Sell 10 shares of AAPL" are 10 or -10, respectively. A continuous action space needs to be normalized to [-1, 1], since the policy is defined on a Gaussian distribution, which needs to be normalized and symmetric. In this portfolio allocation setting, the raw actions are further mapped to nonnegative portfolio weights that sum to 1 (see the sketch below). Training data split: 2009-01-01 to 2020-07-01
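As a concrete illustration of the weight mapping mentioned above, here is a minimal sketch of one common choice, a softmax over the raw actions. It mirrors the normalization role played inside StockPortfolioEnv, though the environment's exact normalization may differ:

```python
import numpy as np

def actions_to_weights(actions: np.ndarray) -> np.ndarray:
    """Map raw policy outputs to portfolio weights: nonnegative, summing to 1."""
    exp = np.exp(actions - actions.max())  # subtract the max for numerical stability
    return exp / exp.sum()

raw_actions = np.array([0.2, -1.0, 0.5])   # one raw action per asset
weights = actions_to_weights(raw_actions)  # ~[0.38, 0.11, 0.51]; sums to 1.0
```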
###Code
train = data_split(df, '2009-01-01','2020-07-01')
#trade = data_split(df, '2020-01-01', config.END_DATE)
train.head()
###Output
_____no_output_____
###Markdown
Environment for Portfolio Allocation
###Code
import numpy as np
import pandas as pd
from gym.utils import seeding
import gym
from gym import spaces
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
from stable_baselines3.common.vec_env import DummyVecEnv
class StockPortfolioEnv(gym.Env):
"""A single stock trading environment for OpenAI gym
Attributes
----------
df: DataFrame
input data
stock_dim : int
number of unique stocks
hmax : int
maximum number of shares to trade
initial_amount : int
start money
transaction_cost_pct: float
transaction cost percentage per trade
reward_scaling: float
scaling factor for reward, good for training
state_space: int
the dimension of input features
action_space: int
equals stock dimension
tech_indicator_list: list
a list of technical indicator names
turbulence_threshold: int
a threshold to control risk aversion
day: int
an increment number to control date
Methods
-------
_sell_stock()
perform sell action based on the sign of the action
_buy_stock()
perform buy action based on the sign of the action
step()
at each step the agent will return actions, then
we will calculate the reward, and return the next observation.
reset()
reset the environment
render()
use render to return other functions
save_asset_memory()
return account value at each time step
save_action_memory()
return actions/positions at each time step
"""
metadata = {'render.modes': ['human']}
def __init__(self,
df,
stock_dim,
hmax,
initial_amount,
transaction_cost_pct,
reward_scaling,
state_space,
action_space,
tech_indicator_list,
turbulence_threshold=None,
lookback=252,
day = 0):
#super(StockEnv, self).__init__()
#money = 10 , scope = 1
self.day = day
self.lookback=lookback
self.df = df
self.stock_dim = stock_dim
self.hmax = hmax
self.initial_amount = initial_amount
self.transaction_cost_pct =transaction_cost_pct
self.reward_scaling = reward_scaling
self.state_space = state_space
self.action_space = action_space
self.tech_indicator_list = tech_indicator_list
# action_space normalization and shape is self.stock_dim
self.action_space = spaces.Box(low = 0, high = 1,shape = (self.action_space,))
# Shape = (34, 30)
# covariance matrix + technical indicators
self.observation_space = spaces.Box(low=-np.inf, high=np.inf, shape = (self.state_space+len(self.tech_indicator_list),self.state_space))
# load data from a pandas dataframe
self.data = self.df.loc[self.day,:]
self.covs = self.data['cov_list'].values[0]
self.state = np.append(np.array(self.covs), [self.data[tech].values.tolist() for tech in self.tech_indicator_list ], axis=0)
self.terminal = False
self.turbulence_threshold = turbulence_threshold
        # initialize state: initial portfolio return + individual stock return + individual weights
self.portfolio_value = self.initial_amount
# memorize portfolio value each step
self.asset_memory = [self.initial_amount]
# memorize portfolio return each step
self.portfolio_return_memory = [0]
self.actions_memory=[[1/self.stock_dim]*self.stock_dim]
self.date_memory=[self.data.date.unique()[0]]
def step(self, actions):
# print(self.day)
self.terminal = self.day >= len(self.df.index.unique())-1
# print(actions)
if self.terminal:
df = pd.DataFrame(self.portfolio_return_memory)
df.columns = ['daily_return']
plt.plot(df.daily_return.cumsum(),'r')
plt.savefig('results/cumulative_reward.png')
plt.close()
plt.plot(self.portfolio_return_memory,'r')
plt.savefig('results/rewards.png')
plt.close()
print("=================================")
print("begin_total_asset:{}".format(self.asset_memory[0]))
print("end_total_asset:{}".format(self.portfolio_value))
df_daily_return = pd.DataFrame(self.portfolio_return_memory)
df_daily_return.columns = ['daily_return']
if df_daily_return['daily_return'].std() !=0:
sharpe = (252**0.5)*df_daily_return['daily_return'].mean()/ \
df_daily_return['daily_return'].std()
print("Sharpe: ",sharpe)
print("=================================")
return self.state, self.reward, self.terminal,{}
else:
#print("Model actions: ",actions)
# actions are the portfolio weight
# normalize to sum of 1
#if (np.array(actions) - np.array(actions).min()).sum() != 0:
# norm_actions = (np.array(actions) - np.array(actions).min()) / (np.array(actions) - np.array(actions).min()).sum()
#else:
# norm_actions = actions
weights = self.softmax_normalization(actions)
#print("Normalized actions: ", weights)
self.actions_memory.append(weights)
last_day_memory = self.data
#load next state
self.day += 1
self.data = self.df.loc[self.day,:]
self.covs = self.data['cov_list'].values[0]
self.state = np.append(np.array(self.covs), [self.data[tech].values.tolist() for tech in self.tech_indicator_list ], axis=0)
#print(self.state)
            # calculate portfolio return
# individual stocks' return * weight
portfolio_return = sum(((self.data.close.values / last_day_memory.close.values)-1)*weights)
# update portfolio value
new_portfolio_value = self.portfolio_value*(1+portfolio_return)
self.portfolio_value = new_portfolio_value
# save into memory
self.portfolio_return_memory.append(portfolio_return)
self.date_memory.append(self.data.date.unique()[0])
self.asset_memory.append(new_portfolio_value)
            # the reward is the new portfolio value or end portfolio value
self.reward = new_portfolio_value
#print("Step reward: ", self.reward)
#self.reward = self.reward*self.reward_scaling
return self.state, self.reward, self.terminal, {}
def reset(self):
self.asset_memory = [self.initial_amount]
self.day = 0
self.data = self.df.loc[self.day,:]
# load states
self.covs = self.data['cov_list'].values[0]
self.state = np.append(np.array(self.covs), [self.data[tech].values.tolist() for tech in self.tech_indicator_list ], axis=0)
self.portfolio_value = self.initial_amount
#self.cost = 0
#self.trades = 0
self.terminal = False
self.portfolio_return_memory = [0]
self.actions_memory=[[1/self.stock_dim]*self.stock_dim]
self.date_memory=[self.data.date.unique()[0]]
return self.state
def render(self, mode='human'):
return self.state
def softmax_normalization(self, actions):
numerator = np.exp(actions)
denominator = np.sum(np.exp(actions))
softmax_output = numerator/denominator
return softmax_output
def save_asset_memory(self):
date_list = self.date_memory
portfolio_return = self.portfolio_return_memory
#print(len(date_list))
#print(len(asset_list))
df_account_value = pd.DataFrame({'date':date_list,'daily_return':portfolio_return})
return df_account_value
def save_action_memory(self):
# date and close price length must match actions length
date_list = self.date_memory
df_date = pd.DataFrame(date_list)
df_date.columns = ['date']
action_list = self.actions_memory
df_actions = pd.DataFrame(action_list)
df_actions.columns = self.data.tic.values
df_actions.index = df_date.date
#df_actions = pd.DataFrame({'date':date_list,'actions':action_list})
return df_actions
def _seed(self, seed=None):
self.np_random, seed = seeding.np_random(seed)
return [seed]
def get_sb_env(self):
e = DummyVecEnv([lambda: self])
obs = e.reset()
return e, obs
stock_dimension = len(train.tic.unique())
state_space = stock_dimension
print(f"Stock Dimension: {stock_dimension}, State Space: {state_space}")
env_kwargs = {
"hmax": 100,
"initial_amount": 1000000,
"transaction_cost_pct": 0.001,
"state_space": state_space,
"stock_dim": stock_dimension,
"tech_indicator_list": config.TECHNICAL_INDICATORS_LIST,
"action_space": stock_dimension,
"reward_scaling": 1e-4
}
e_train_gym = StockPortfolioEnv(df = train, **env_kwargs)
env_train, _ = e_train_gym.get_sb_env()
print(type(env_train))
###Output
<class 'stable_baselines3.common.vec_env.dummy_vec_env.DummyVecEnv'>
###Markdown
Part 6: Implement DRL Algorithms
* The implementation of the DRL algorithms is based on **OpenAI Baselines** and **Stable Baselines**. Stable Baselines is a fork of OpenAI Baselines, with a major structural refactoring and code cleanups.
* The FinRL library includes fine-tuned standard DRL algorithms, such as DQN, DDPG, Multi-Agent DDPG, PPO, SAC, A2C and TD3. We also allow users to design their own DRL algorithms by adapting these implementations.
###Code
# initialize
agent = DRLAgent(env = env_train)
###Output
_____no_output_____
###Markdown
Model 1: **A2C**
###Code
agent = DRLAgent(env = env_train)
A2C_PARAMS = {"n_steps": 5, "ent_coef": 0.005, "learning_rate": 0.0002}
model_a2c = agent.get_model(model_name="a2c",model_kwargs = A2C_PARAMS)
trained_a2c = agent.train_model(model=model_a2c,
tb_log_name='a2c',
total_timesteps=50000)
trained_a2c.save('/content/trained_models/trained_a2c.zip')
###Output
_____no_output_____
###Markdown
Model 2: **PPO**
###Code
agent = DRLAgent(env = env_train)
PPO_PARAMS = {
"n_steps": 2048,
"ent_coef": 0.005,
"learning_rate": 0.0001,
"batch_size": 128,
}
model_ppo = agent.get_model("ppo",model_kwargs = PPO_PARAMS)
trained_ppo = agent.train_model(model=model_ppo,
tb_log_name='ppo',
total_timesteps=80000)
trained_ppo.save('/content/trained_models/trained_ppo.zip')
###Output
_____no_output_____
###Markdown
Model 3: **DDPG**
###Code
agent = DRLAgent(env = env_train)
DDPG_PARAMS = {"batch_size": 128, "buffer_size": 50000, "learning_rate": 0.001}
model_ddpg = agent.get_model("ddpg",model_kwargs = DDPG_PARAMS)
trained_ddpg = agent.train_model(model=model_ddpg,
tb_log_name='ddpg',
total_timesteps=50000)
trained_ddpg.save('/content/trained_models/trained_ddpg.zip')
###Output
_____no_output_____
###Markdown
Model 4: **SAC**
###Code
agent = DRLAgent(env = env_train)
SAC_PARAMS = {
"batch_size": 128,
"buffer_size": 100000,
"learning_rate": 0.0003,
"learning_starts": 100,
"ent_coef": "auto_0.1",
}
model_sac = agent.get_model("sac",model_kwargs = SAC_PARAMS)
trained_sac = agent.train_model(model=model_sac,
tb_log_name='sac',
total_timesteps=50000)
trained_sac.save('/content/trained_models/trained_sac.zip')
###Output
_____no_output_____
###Markdown
Model 5: **TD3**
###Code
agent = DRLAgent(env = env_train)
TD3_PARAMS = {"batch_size": 100,
"buffer_size": 1000000,
"learning_rate": 0.001}
model_td3 = agent.get_model("td3",model_kwargs = TD3_PARAMS)
trained_td3 = agent.train_model(model=model_td3,
tb_log_name='td3',
total_timesteps=30000)
trained_td3.save('/content/trained_models/trained_td3.zip')
###Output
_____no_output_____
###Markdown
Trading
Assume that we have $1,000,000 initial capital at 2020-07-01. We use the trained A2C model to trade Dow Jones 30 stocks (the trade window below runs from 2020-07-01 to 2021-07-01).
###Code
trade = data_split(df,'2020-07-01', '2021-07-01')
e_trade_gym = StockPortfolioEnv(df = trade, **env_kwargs)
trade.shape
df_daily_return, df_actions = DRLAgent.DRL_prediction(model=trained_a2c,
environment = e_trade_gym)
df_daily_return.head()
df_daily_return.to_csv('df_daily_return.csv')
df_actions.head()
df_actions.to_csv('df_actions.csv')
###Output
_____no_output_____
###Markdown
Part 7: Backtest Our Strategy
Backtesting plays a key role in evaluating the performance of a trading strategy. An automated backtesting tool is preferred because it reduces human error. We usually use the Quantopian pyfolio package to backtest our trading strategies. It is easy to use and consists of various individual plots that provide a comprehensive image of the performance of a trading strategy.
7.1 BackTestStats
Pass in the daily return series (df_daily_return produced above); this information is stored in the env class.
###Code
from pyfolio import timeseries
DRL_strat = convert_daily_return_to_pyfolio_ts(df_daily_return)
perf_func = timeseries.perf_stats
perf_stats_all = perf_func( returns=DRL_strat,
factor_returns=DRL_strat,
positions=None, transactions=None, turnover_denom="AGB")
print("==============DRL Strategy Stats===========")
perf_stats_all
#baseline stats
print("==============Get Baseline Stats===========")
baseline_df = get_baseline(
ticker="^DJI",
start = df_daily_return.loc[0,'date'],
end = df_daily_return.loc[len(df_daily_return)-1,'date'])
stats = backtest_stats(baseline_df, value_col_name = 'close')
###Output
==============Get Baseline Stats===========
[*********************100%***********************] 1 of 1 completed
Shape of DataFrame: (251, 8)
Annual return 0.334042
Cumulative returns 0.332517
Annual volatility 0.146033
Sharpe ratio 2.055458
Calmar ratio 3.740347
Stability 0.945402
Max drawdown -0.089308
Omega ratio 1.408111
Sortino ratio 3.075978
Skew NaN
Kurtosis NaN
Tail ratio 1.078766
Daily value at risk -0.017207
dtype: float64
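###Markdown
For reference, the annualized Sharpe ratio reported above can be reproduced by hand from a daily-return series. A minimal sketch, assuming 252 trading days per year:
###Code
import numpy as np

def annualized_sharpe(daily_returns, periods_per_year=252):
    # scale the mean/std of daily returns by the square root of periods per year
    return np.sqrt(periods_per_year) * daily_returns.mean() / daily_returns.std()

# e.g. annualized_sharpe(df_daily_return['daily_return'])
###Output
_____no_output_____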
###Markdown
7.2 BackTestPlot
###Code
import pyfolio
%matplotlib inline
baseline_df = get_baseline(
ticker='^DJI', start=df_daily_return.loc[0,'date'], end='2021-07-01'
)
baseline_returns = get_daily_return(baseline_df, value_col_name="close")
with pyfolio.plotting.plotting_context(font_scale=1.1):
pyfolio.create_full_tear_sheet(returns = DRL_strat,
benchmark_rets=baseline_returns, set_context=False)
###Output
_____no_output_____
###Markdown
Min-Variance Portfolio Allocation
###Code
!pip install PyPortfolioOpt
from pypfopt.efficient_frontier import EfficientFrontier
from pypfopt import risk_models
unique_tic = trade.tic.unique()
unique_trade_date = trade.date.unique()
df.head()
#calculate_portfolio_minimum_variance
portfolio = pd.DataFrame(index = range(1), columns = unique_trade_date)
initial_capital = 1000000
portfolio.loc[0,unique_trade_date[0]] = initial_capital
for i in range(len( unique_trade_date)-1):
df_temp = df[df.date==unique_trade_date[i]].reset_index(drop=True)
df_temp_next = df[df.date==unique_trade_date[i+1]].reset_index(drop=True)
#Sigma = risk_models.sample_cov(df_temp.return_list[0])
#calculate covariance matrix
Sigma = df_temp.return_list[0].cov()
#portfolio allocation
ef_min_var = EfficientFrontier(None, Sigma,weight_bounds=(0, 0.1))
#minimum variance
raw_weights_min_var = ef_min_var.min_volatility()
#get weights
cleaned_weights_min_var = ef_min_var.clean_weights()
#current capital
cap = portfolio.iloc[0, i]
#current cash invested for each stock
current_cash = [element * cap for element in list(cleaned_weights_min_var.values())]
# current held shares
current_shares = list(np.array(current_cash)
/ np.array(df_temp.close))
# next time period price
next_price = np.array(df_temp_next.close)
##next_price * current share to calculate next total account value
portfolio.iloc[0, i+1] = np.dot(current_shares, next_price)
portfolio=portfolio.T
portfolio.columns = ['account_value']
portfolio.head()
a2c_cumpod =(df_daily_return.daily_return+1).cumprod()-1
min_var_cumpod =(portfolio.account_value.pct_change()+1).cumprod()-1
dji_cumpod =(baseline_returns+1).cumprod()-1
###Output
_____no_output_____
###Markdown
Plotly: DRL, Min-Variance, DJIA
###Code
from datetime import datetime as dt
import matplotlib.pyplot as plt
import plotly
import plotly.graph_objs as go
time_ind = pd.Series(df_daily_return.date)
trace0_portfolio = go.Scatter(x = time_ind, y = a2c_cumpod, mode = 'lines', name = 'A2C (Portfolio Allocation)')
trace1_portfolio = go.Scatter(x = time_ind, y = dji_cumpod, mode = 'lines', name = 'DJIA')
trace2_portfolio = go.Scatter(x = time_ind, y = min_var_cumpod, mode = 'lines', name = 'Min-Variance')
#trace3_portfolio = go.Scatter(x = time_ind, y = ddpg_cumpod, mode = 'lines', name = 'DDPG')
#trace4_portfolio = go.Scatter(x = time_ind, y = addpg_cumpod, mode = 'lines', name = 'Adaptive-DDPG')
#trace5_portfolio = go.Scatter(x = time_ind, y = min_cumpod, mode = 'lines', name = 'Min-Variance')
#trace4 = go.Scatter(x = time_ind, y = addpg_cumpod, mode = 'lines', name = 'Adaptive-DDPG')
#trace2 = go.Scatter(x = time_ind, y = portfolio_cost_minv, mode = 'lines', name = 'Min-Variance')
#trace3 = go.Scatter(x = time_ind, y = spx_value, mode = 'lines', name = 'SPX')
fig = go.Figure()
fig.add_trace(trace0_portfolio)
fig.add_trace(trace1_portfolio)
fig.add_trace(trace2_portfolio)
fig.update_layout(
legend=dict(
x=0,
y=1,
traceorder="normal",
font=dict(
family="sans-serif",
size=15,
color="black"
),
bgcolor="White",
bordercolor="white",
borderwidth=2
),
)
#fig.update_layout(legend_orientation="h")
fig.update_layout(title={
#'text': "Cumulative Return using FinRL",
'y':0.85,
'x':0.5,
'xanchor': 'center',
'yanchor': 'top'})
#with Transaction cost
#fig.update_layout(title = 'Quarterly Trade Date')
fig.update_layout(
# margin=dict(l=20, r=20, t=20, b=20),
paper_bgcolor='rgba(1,1,0,0)',
plot_bgcolor='rgba(1, 1, 0, 0)',
#xaxis_title="Date",
yaxis_title="Cumulative Return",
xaxis={'type': 'date',
'tick0': time_ind[0],
'tickmode': 'linear',
'dtick': 86400000.0 *80}
)
fig.update_xaxes(showline=True,linecolor='black',showgrid=True, gridwidth=1, gridcolor='LightSteelBlue',mirror=True)
fig.update_yaxes(showline=True,linecolor='black',showgrid=True, gridwidth=1, gridcolor='LightSteelBlue',mirror=True)
fig.update_yaxes(zeroline=True, zerolinewidth=1, zerolinecolor='LightSteelBlue')
fig.show()
###Output
_____no_output_____ |
python3/notebooks/amazon-reviews/.ipynb_checkpoints/sample-and-join-checkpoint.ipynb | ###Markdown
sample the data so that it looks like this: TEXT | product categories | rating given
###Code
import ast
import mmap
import numpy as np
import pandas as pd
from pandas import json_normalize  # on older pandas: from pandas.io.json import json_normalize
from tqdm import tqdm

def get_num_lines(file_path):
    # assumed helper (not defined in the original notebook): count lines via mmap
    with open(file_path, "r+") as f:
        buf = mmap.mmap(f.fileno(), 0)
        lines = 0
        while buf.readline():
            lines += 1
        return lines

root = "/media/felipe/SAMSUNG/AmazonReviews/"
reviews_df = pd.read_json(root+"/sample_reviews_Books_5.json",lines=True)
reviews_df.head(1)
reviews_df = reviews_df[['asin','overall','reviewText']]
reviews_df.index = reviews_df['asin']
reviews_df.drop(['asin'],axis=1,inplace=True)
reviews_df['categories'] = np.nan
reviews_df.head()
with open(root+"/metadata.json") as f:
for line in tqdm(f, total=get_num_lines(root+"/metadata.json")):
json_data = ast.literal_eval(line)
other_df = json_normalize(json_data)
other_df['asin'] = other_df['asin'].astype('object')
other_df.index = other_df['asin']
other_df.drop(['asin'],axis=1,inplace=True)
if not 'categories' in other_df.columns.values:
other_df['categories'] = ''
reviews_df.update(other_df)
reviews_df
sample_metadata_df = pd.read_json(root+"/sample_metadata.json",lines=True)
###Output
_____no_output_____ |
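###Markdown
The join above relies on `DataFrame.update`, which aligns on the index and overwrites matching cells in place. A toy sketch of that behaviour (the data below is illustrative, not from the dataset):
###Code
import numpy as np
import pandas as pd

left = pd.DataFrame({'categories': [np.nan, np.nan]}, index=['a', 'b'], dtype=object)
right = pd.DataFrame({'categories': ['Books']}, index=['a'])
left.update(right)  # only index 'a' is overwritten; 'b' keeps NaN
print(left)
###Output
_____no_output_____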
COM466_Fuzzy_Logic_Project.ipynb | ###Markdown
Fuzzy Logic Project
Credit decision system
###Code
# needed libraries
import numpy as np
import skfuzzy as fuzz
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Fuzzy Input Set Ranges
###Code
# ranges of the sets
x_house_market = np.arange(0, 1000, 1) # Market value $ x10^3
x_house_location = np.arange(0, 10, .01) # location of house
x_person_asset = np.arange(0,1000, 1) # Asset $ x10^3
x_person_income = np.arange(0,100, .1) # income $ x10^3
x_interest = np.arange(0, 10, .01) # Interest %
###Output
_____no_output_____
###Markdown
Defining the Fuzzy Inputs Sets
###Code
# house market value sets
market_low = fuzz.trapmf(x_house_market, [0, 0, 50, 100])
market_medium = fuzz.trapmf(x_house_market, [50, 100, 200, 250])
market_high = fuzz.trapmf(x_house_market, [200, 300, 650, 850])
market_very_high = fuzz.trapmf(x_house_market, [650, 850, 1000, 1000])
# house location sets
location_bad = fuzz.trapmf(x_house_location, [0, 0, 1.5, 4])
location_fair = fuzz.trapmf(x_house_location, [2.5, 5, 6, 8.5])
location_excellent = fuzz.trapmf(x_house_location, [6, 8.5, 10, 10])
# person asset sets
p_asset_low = fuzz.trimf(x_person_asset, [0, 0, 150])
p_asset_medium = fuzz.trapmf(x_person_asset, [50, 250, 500, 650])
p_asset_high = fuzz.trapmf(x_person_asset, [500, 700, 1000, 1000])
# person income sets
p_income_low = fuzz.trapmf(x_person_income, [0, 0, 10, 25])
p_income_medium = fuzz.trimf(x_person_income, [15, 35, 55])
p_income_high = fuzz.trimf(x_person_income, [40, 60, 80])
p_income_very_high = fuzz.trapmf(x_person_income, [60, 80, 100, 100])
# interest sets
b_interest_low = fuzz.trapmf(x_interest, [0, 0, 2, 5])
b_interest_medium = fuzz.trapmf(x_interest, [2, 4, 6, 8])
b_interest_high = fuzz.trapmf(x_interest, [6, 8.5, 10, 10])
###Output
_____no_output_____
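###Markdown
To see how a crisp input maps into these sets, `fuzz.interp_membership` evaluates a membership function at a single point. A small sketch (the $150k market value is an illustrative choice):
###Code
# membership degrees of a $150k market value in the low and medium sets
mu_low = fuzz.interp_membership(x_house_market, market_low, 150)
mu_medium = fuzz.interp_membership(x_house_market, market_medium, 150)
print(mu_low, mu_medium)  # 0.0 and 1.0 for the trapezoids defined above
###Output
_____no_output_____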
###Markdown
Showing the Defined Fuzzy Inputs
###Code
plt.rcParams["figure.figsize"] = 15, 20
# house market value
plt.subplot(5,1,1), plt.plot(x_house_market, market_low, 'b', linewidth=1.5, label='Low')
plt.subplot(5,1,1), plt.plot(x_house_market, market_medium, 'g', linewidth=1.5, label='Medium')
plt.subplot(5,1,1), plt.plot(x_house_market, market_high, 'r', linewidth=1.5, label='High')
plt.subplot(5,1,1), plt.plot(x_house_market, market_very_high, 'y', linewidth=1.5, label='Very High'),plt.title("House Market Value $ x10^3")
plt.legend()
# house location
plt.subplot(5,1,2), plt.plot(x_house_location, location_bad, 'b', linewidth=1.5, label='Bad')
plt.subplot(5,1,2), plt.plot(x_house_location, location_fair, 'g', linewidth=1.5, label='Fair')
plt.subplot(5,1,2), plt.plot(x_house_location, location_excellent, 'r', linewidth=1.5, label='Excellent'),plt.title("House Location")
plt.legend()
# person Assets
plt.subplot(5,1,3), plt.plot(x_person_asset, p_asset_low, 'b', linewidth=1.5, label='Low')
plt.subplot(5,1,3), plt.plot(x_person_asset, p_asset_medium, 'g', linewidth=1.5, label='Medium')
plt.subplot(5,1,3), plt.plot(x_person_asset, p_asset_high, 'r', linewidth=1.5, label='High'),plt.title("Person Assets $ x10^3")
plt.legend()
# person income
plt.subplot(5,1,4), plt.plot(x_person_income, p_income_low, 'b', linewidth=1.5, label='Low')
plt.subplot(5,1,4), plt.plot(x_person_income, p_income_medium, 'g', linewidth=1.5, label='Medium')
plt.subplot(5,1,4), plt.plot(x_person_income, p_income_high, 'r', linewidth=1.5, label='High')
plt.subplot(5,1,4), plt.plot(x_person_income, p_income_very_high, 'y', linewidth=1.5, label='Very High'),plt.title("Person income $ x10^3")
plt.legend()
# Interest
plt.subplot(5,1,5), plt.plot(x_interest, b_interest_low, 'b', linewidth=1.5, label='Low')
plt.subplot(5,1,5), plt.plot(x_interest, b_interest_medium, 'g', linewidth=1.5, label='Medium')
plt.subplot(5,1,5), plt.plot(x_interest, b_interest_high, 'r', linewidth=1.5, label='High'),plt.title("Interest %")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Defining Fuzzy Output Set Ranges
###Code
x_house = np.arange(0, 10, .01) # House evaluation range
x_applicant = np.arange(0, 10, .01) # applicant evalutaion range
x_credit = np.arange(0, 500, .5) # Credit evalutaion Range $ x10^3
###Output
_____no_output_____
###Markdown
Defining the Fuzzy Output Sets
###Code
# house evalutation output fuzzy sets
house_very_low = fuzz.trimf(x_house, [0, 0, 3])
house_low = fuzz.trimf(x_house, [0, 3, 6])
house_medium = fuzz.trimf(x_house, [2, 5, 8])
house_high = fuzz.trimf(x_house, [4, 7, 10])
house_very_high = fuzz.trimf(x_house, [7, 10, 10])
# applicant evalutation output fuzzy sets
applicant_low = fuzz.trapmf(x_applicant, [0, 0, 2, 4])
applicant_medium = fuzz.trimf(x_applicant, [2, 5, 8])
applicant_high = fuzz.trapmf(x_applicant, [6, 8, 10, 10])
# credit evalutation output fuzzy sets
credit_very_low = fuzz.trimf(x_credit, [0, 0, 125])
credit_low = fuzz.trimf(x_credit, [0, 125, 250])
credit_medium = fuzz.trimf(x_credit, [125, 250, 375])
credit_high = fuzz.trimf(x_credit, [250, 375, 500])
credit_very_high = fuzz.trimf(x_credit, [375, 500, 500])
###Output
_____no_output_____
###Markdown
Showing the Defined Fuzzy Outputs
###Code
plt.rcParams["figure.figsize"] = 15, 12
# house evaluation
plt.subplot(3,1,1), plt.plot(x_house, house_very_low, 'c', linewidth=1.5, label='Very Low')
plt.subplot(3,1,1), plt.plot(x_house, house_low, 'b', linewidth=1.5, label='Low')
plt.subplot(3,1,1), plt.plot(x_house, house_medium, 'g', linewidth=1.5, label='Medium')
plt.subplot(3,1,1), plt.plot(x_house, house_high, 'r', linewidth=1.5, label='High')
plt.subplot(3,1,1), plt.plot(x_house, house_very_high, 'y', linewidth=1.5, label='Very High'),plt.title("House Evaluation Output")
plt.legend()
# applicant evaluation
plt.subplot(3,1,2), plt.plot(x_applicant, applicant_low, 'b', linewidth=1.5, label='Low')
plt.subplot(3,1,2), plt.plot(x_applicant, applicant_medium, 'g', linewidth=1.5, label='Medium')
plt.subplot(3,1,2), plt.plot(x_applicant, applicant_high, 'r', linewidth=1.5, label='High'),plt.title("Applicant Evalutaion Output")
plt.legend()
# credit evaluation
plt.subplot(3,1,3), plt.plot(x_credit, credit_very_low, 'c', linewidth=1.5, label='Very Low')
plt.subplot(3,1,3), plt.plot(x_credit, credit_low, 'b', linewidth=1.5, label='Low')
plt.subplot(3,1,3), plt.plot(x_credit, credit_medium, 'g', linewidth=1.5, label='Medium')
plt.subplot(3,1,3), plt.plot(x_credit, credit_high, 'r', linewidth=1.5, label='High')
plt.subplot(3,1,3), plt.plot(x_credit, credit_very_high, 'y', linewidth=1.5, label='Very High'),plt.title("Credit Value $ x10^3")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
AND and OR helper functions
- For "and" we use the minimum function to apply rules
- For "or" we use the maximum function to apply rules
###Code
def and_rule(x, y, z):
rule = np.fmin(x, y)
act = np.fmin(rule, z)
return act
def or_rule(x, y, z):
rule = np.fmax(x, y)
act = np.fmax(rule, z)
return act
###Output
_____no_output_____
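###Markdown
A quick illustration of why `np.fmin` and `np.fmax` implement fuzzy AND/OR: they take the pointwise minimum and maximum of two membership arrays (the values below are illustrative):
###Code
a = np.array([0.2, 0.7, 1.0])
b = np.array([0.5, 0.4, 0.8])
print(np.fmin(a, b))  # [0.2 0.4 0.8] -> fuzzy AND
print(np.fmax(a, b))  # [0.5 0.7 1. ] -> fuzzy OR
###Output
_____no_output_____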
###Markdown
House Evaluation Rule Base 1
1. If (Market_value is Low) then (House is Low) - Market_value == Low AND House == Low ==> C1
2. If (Location is Bad) then (House is Low) - Location == Bad AND House == Low ==> C2
3. If (Location is Bad) and (Market_value is Low) then (House is Very_low) - ( Location == Bad AND Market_value == Low ) AND House == Very_low ==> C3
4. If (Location is Bad) and (Market_value is Medium) then (House is Low) - ( Location == Bad AND Market_value == Medium ) AND House == Low ==> C4
5. If (Location is Bad) and (Market_value is High) then (House is Medium) - ( Location == Bad AND Market_value == High ) AND House == Medium ==> C5
6. If (Location is Bad) and (Market_value is Very_high) then (House is High) - ( Location == Bad AND Market_value == Very_high ) AND House == High ==> C6
7. If (Location is Fair) and (Market_value is Low) then (House is Low) - ( Location == Fair AND Market_value == Low ) AND House == Low ==> C7
8. If (Location is Fair) and (Market_value is Medium) then (House is Medium) - ( Location == Fair AND Market_value == Medium ) AND House == Medium ==> C8
9. If (Location is Fair) and (Market_value is High) then (House is High) - ( Location == Fair AND Market_value == High ) AND House == High ==> C9
10. If (Location is Fair) and (Market_value is Very_high) then (House is Very_high) - ( Location == Fair AND Market_value == Very_high ) AND House == Very_high ==> C10
11. If (Location is Excellent) and (Market_value is Low) then (House is Medium) - ( Location == Excellent AND Market_value == Low ) AND House == Medium ==> C11
12. If (Location is Excellent) and (Market_value is Medium) then (House is High) - ( Location == Excellent AND Market_value == Medium ) AND House == High ==> C12
13. If (Location is Excellent) and (Market_value is High) then (House is Very_high) - ( Location == Excellent AND Market_value == High ) AND House == Very_high ==> C13
14. If (Location is Excellent) and (Market_value is Very_high) then (House is Very_high) - ( Location == Excellent AND Market_value == Very_high ) AND House == Very_high ==> C14

Rule Base 1 Combining:
=> Rule = C1 OR C2 OR C3 OR C4 OR C5 OR C6 OR C7 OR C8 OR C9 OR C10 OR C11 OR C12 OR C13 OR C14
###Code
def apply_house_rules(market_value, location, verbose=0):
# house market value functions
market_level_low = fuzz.interp_membership(x_house_market, market_low, market_value)
market_level_medium = fuzz.interp_membership(x_house_market, market_medium, market_value)
market_level_high = fuzz.interp_membership(x_house_market, market_high, market_value)
market_level_very_high = fuzz.interp_membership(x_house_market, market_very_high, market_value)
# house location
location_level_bad = fuzz.interp_membership(x_house_location, location_bad, location)
location_level_fair = fuzz.interp_membership(x_house_location, location_fair, location)
location_level_excellent = fuzz.interp_membership(x_house_location, location_excellent, location)
### rules
# 1. If (Market_value is Low) then (House is Low)
house_act_low1 = np.fmin(market_level_low, house_low)
# 2. If (Location is Bad) then (House is Low)
house_act_low2 = np.fmin(location_level_bad, house_low)
# 3. If (Location is Bad) and (Market_value is Low) then (House is Very_low)
house_act_very_low = and_rule(location_level_bad, market_level_low, house_very_low)
# 4. If (Location is Bad) and (Market_value is Medium) then (House is Low)
house_act_low3 = and_rule(location_level_bad, market_level_medium, house_low)
# 5. If (Location is Bad) and (Market_value is High) then (House is Medium)
house_act_medium1 = and_rule(location_level_bad, market_level_high, house_medium)
# 6. If (Location is Bad) and (Market_value is Very_high) then (House is High)
house_act_high1 = and_rule(location_level_bad, market_level_very_high, house_high)
# 7. If (Location is Fair) and (Market_value is Low) then (House is Low)
house_act_low4 = and_rule(location_level_fair, market_level_low, house_low)
# 8. If (Location is Fair) and (Market_value is Medium) then (House is Medium)
house_act_medium2 = and_rule(location_level_fair, market_level_medium, house_medium)
# 9. If (Location is Fair) and (Market_value is High) then (House is High)
house_act_high2 = and_rule(location_level_fair, market_level_high, house_high)
# 10. If (Location is Fair) and (Market_value is Very_high) then (House is Very_high)
house_act_very_high1 = and_rule(location_level_fair, market_level_very_high, house_very_high)
# 11. If (Location is Excellent) and (Market_value is Low) then (House is Medium)
house_act_medium3 = and_rule(location_level_excellent, market_level_low, house_medium)
# 12. If (Location is Excellent) and (Market_value is Medium) then (House is High)
house_act_high3 = and_rule(location_level_excellent, market_level_medium, house_high)
# 13. If (Location is Excellent) and (Market_value is High) then (House is Very_high)
house_act_very_high2 = and_rule(location_level_excellent, market_level_high, house_very_high)
# 14. If (Location is Excellent) and (Market_value is Very_high) then (House is Very_high)
house_act_very_high3 = and_rule(location_level_excellent, market_level_very_high, house_very_high)
# combine the rules
step = or_rule(house_act_low1, house_act_low2, house_act_low3)
house_act_low = np.fmax(step, house_act_low4)
house_act_medium = or_rule(house_act_medium1, house_act_medium2, house_act_medium3)
house_act_high = or_rule(house_act_high1, house_act_high2, house_act_high3)
house_act_very_high = or_rule(house_act_very_high1, house_act_very_high2, house_act_very_high3)
step = or_rule(house_act_very_low, house_act_low, house_act_medium)
house = or_rule(step, house_act_high, house_act_very_high)
# if we want to see the graph of the output
if verbose == 1:
plt.rcParams["figure.figsize"] = 15, 4
plt.plot(x_house, house_very_low, 'c', linestyle='--', linewidth=1.5, label='Very Low')
plt.plot(x_house, house_low, 'b', linestyle='--', linewidth=1.5, label='Low')
plt.plot(x_house, house_medium, 'g', linestyle='--', linewidth=1.5, label='Medium')
plt.plot(x_house, house_high, 'r', linestyle='--', linewidth=1.5, label='High')
plt.plot(x_house, house_very_high, 'y', linestyle='--', linewidth=1.5, label='Very High'),plt.title("House Evaluation Output")
plt.legend()
plt.fill_between(x_house, house, color='r')
plt.ylim(-0.1, 1.1)
plt.grid(True)
plt.show()
return house
###Output
_____no_output_____
###Markdown
Example Output For Rule Base 1
###Code
## trying the function with an example
h_eval = apply_house_rules(250, 4, verbose=1)
###Output
_____no_output_____
###Markdown
Applicant Evaluation Rule Base 2
1. If (Asset is Low) and (Income is Low) then (Applicant is Low) - ( Asset == Low AND Income == Low ) AND Applicant == Low ==> C1
2. If (Asset is Low) and (Income is Medium) then (Applicant is Low) - ( Asset == Low AND Income == Medium ) AND Applicant == Low ==> C2
3. If (Asset is Low) and (Income is High) then (Applicant is Medium) - ( Asset == Low AND Income == High ) AND Applicant == Medium ==> C3
4. If (Asset is Low) and (Income is Very_high) then (Applicant is High) - ( Asset == Low AND Income == Very_high ) AND Applicant == High ==> C4
5. If (Asset is Medium) and (Income is Low) then (Applicant is Low) - ( Asset == Medium AND Income == Low ) AND Applicant == Low ==> C5
6. If (Asset is Medium) and (Income is Medium) then (Applicant is Medium) - ( Asset == Medium AND Income == Medium ) AND Applicant == Medium ==> C6
7. If (Asset is Medium) and (Income is High) then (Applicant is High) - ( Asset == Medium AND Income == High ) AND Applicant == High ==> C7
8. If (Asset is Medium) and (Income is Very_high) then (Applicant is High) - ( Asset == Medium AND Income == Very_high ) AND Applicant == High ==> C8
9. If (Asset is High) and (Income is Low) then (Applicant is Medium) - ( Asset == High AND Income == Low ) AND Applicant == Medium ==> C9
10. If (Asset is High) and (Income is Medium) then (Applicant is Medium) - ( Asset == High AND Income == Medium ) AND Applicant == Medium ==> C10
11. If (Asset is High) and (Income is High) then (Applicant is High) - ( Asset == High AND Income == High ) AND Applicant == High ==> C11
12. If (Asset is High) and (Income is Very_high) then (Applicant is High) - ( Asset == High AND Income == Very_high ) AND Applicant == High ==> C12

Rule Base 2 Combining
=> Rule = C1 OR C2 OR C3 OR C4 OR C5 OR C6 OR C7 OR C8 OR C9 OR C10 OR C11 OR C12
###Code
def apply_applicant_rules(assets, income, verbose=0):
# person asset
p_asset_level_low = fuzz.interp_membership(x_person_asset, p_asset_low, assets)
p_asset_level_medium = fuzz.interp_membership(x_person_asset, p_asset_medium, assets)
p_asset_level_high = fuzz.interp_membership(x_person_asset, p_asset_high, assets)
# person income
p_income_level_low = fuzz.interp_membership(x_person_income, p_income_low, income)
p_income_level_medium = fuzz.interp_membership(x_person_income, p_income_medium, income)
p_income_level_high = fuzz.interp_membership(x_person_income, p_income_high, income)
p_income_level_very_high = fuzz.interp_membership(x_person_income, p_income_very_high, income)
# 1. If (Asset is Low) and (Income is Low) then (Applicant is Low)
applicant_act_low1 = and_rule(p_asset_level_low, p_income_level_low, applicant_low)
# 2. If (Asset is Low) and (Income is Medium) then (Applicant is Low)
applicant_act_low2 = and_rule(p_asset_level_low, p_income_level_medium, applicant_low)
# 3. If (Asset is Low) and (Income is High) then (Applicant is Medium)
applicant_act_medium1 = and_rule(p_asset_level_low, p_income_level_high, applicant_medium)
# 4. If (Asset is Low) and (Income is Very_high) then (Applicant is High)
applicant_act_high1 = and_rule(p_asset_level_low, p_income_level_very_high, applicant_high)
# 5. If (Asset is Medium) and (Income is Low) then (Applicant is Low)
applicant_act_low3 = and_rule(p_asset_level_medium, p_income_level_low, applicant_low)
# 6. If (Asset is Medium) and (Income is Medium) then (Applicant is Medium)
applicant_act_medium2 = and_rule(p_asset_level_medium, p_income_level_medium, applicant_medium)
# 7. If (Asset is Medium) and (Income is High) then (Applicant is High)
applicant_act_high2 = and_rule(p_asset_level_medium, p_income_level_high, applicant_high)
# 8. If (Asset is Medium) and (Income is Very_high) then (Applicant is High)
applicant_act_high3 = and_rule(p_asset_level_medium, p_income_level_very_high, applicant_high)
# 9. If (Asset is High) and (Income is Low) then (Applicant is Medium)
applicant_act_medium3 = and_rule(p_asset_level_high, p_income_level_low, applicant_medium)
# 10. If (Asset is High) and (Income is Medium) then (Applicant is Medium)
applicant_act_medium4 = and_rule(p_asset_level_high, p_income_level_medium, applicant_medium)
# 11. If (Asset is High) and (Income is High) then (Applicant is High)
applicant_act_high4 = and_rule(p_asset_level_high, p_income_level_high, applicant_high)
# 12. If (Asset is High) and (Income is Very_high) then (Applicant is High)
applicant_act_high5 = and_rule(p_asset_level_high, p_income_level_very_high, applicant_high)
# combine the rules
applicant_act_low = or_rule(applicant_act_low1, applicant_act_low2, applicant_act_low3)
step = or_rule(applicant_act_medium1, applicant_act_medium2, applicant_act_medium3)
applicant_act_medium = np.fmax(step, applicant_act_medium4)
step = or_rule(applicant_act_high1, applicant_act_high2, applicant_act_high3)
applicant_act_high = or_rule(step, applicant_act_high4, applicant_act_high5)
applicant = or_rule(applicant_act_low, applicant_act_medium, applicant_act_high)
# if we want to see the graph of the output
if verbose == 1:
plt.rcParams["figure.figsize"] = 15, 4
plt.plot(x_applicant, applicant_low, 'b', linestyle='--', linewidth=1.5, label='Low')
plt.plot(x_applicant, applicant_medium, 'g', linestyle='--', linewidth=1.5, label='Medium')
plt.plot(x_applicant, applicant_high, 'r', linestyle='--', linewidth=1.5, label='High'),plt.title("Applicant Evalutaion Output")
plt.legend()
plt.fill_between(x_applicant, applicant, color='r')
plt.ylim(-0.1, 1.1)
plt.grid(True)
plt.show()
return applicant
###Output
_____no_output_____
###Markdown
Example Output For Rule Base 2
###Code
## trying the function with an example
a_eval = apply_applicant_rules(550, 45, verbose=1)
###Output
_____no_output_____
###Markdown
Credit Evaluation Rule Base 3
1. If (Income is Low) and (Interest is Medium) then (Credit is Very_low) - ( Income == Low AND Interest == Medium ) AND Credit == Very_low ==> C1
2. If (Income is Low) and (Interest is High) then (Credit is Very_low) - ( Income == Low AND Interest == High ) AND Credit == Very_low ==> C2
3. If (Income is Medium) and (Interest is High) then (Credit is Low) - ( Income == Medium AND Interest == High ) AND Credit == Low ==> C3
4. If (Applicant is Low) then (Credit is Very_low) - Applicant == Low AND Credit == Very_low ==> C4
5. If (House is Very_low) then (Credit is Very_low) - House == Very_low AND Credit == Very_low ==> C5
6. If (Applicant is Medium) and (House is Very_low) then (Credit is Low) - ( Applicant == Medium AND House == Very_low ) AND Credit == Low ==> C6
7. If (Applicant is Medium) and (House is Low) then (Credit is Low) - ( Applicant == Medium AND House == Low ) AND Credit == Low ==> C7
8. If (Applicant is Medium) and (House is Medium) then (Credit is Medium) - ( Applicant == Medium AND House == Medium ) AND Credit == Medium ==> C8
9. If (Applicant is Medium) and (House is High) then (Credit is High) - ( Applicant == Medium AND House == High ) AND Credit == High ==> C9
10. If (Applicant is Medium) and (House is Very_high) then (Credit is High) - ( Applicant == Medium AND House == Very_high ) AND Credit == High ==> C10
11. If (Applicant is High) and (House is Very_low) then (Credit is Low) - ( Applicant == High AND House == Very_low ) AND Credit == Low ==> C11
12. If (Applicant is High) and (House is Low) then (Credit is Medium) - ( Applicant == High AND House == Low ) AND Credit == Medium ==> C12
13. If (Applicant is High) and (House is Medium) then (Credit is High) - ( Applicant == High AND House == Medium ) AND Credit == High ==> C13
14. If (Applicant is High) and (House is High) then (Credit is High) - ( Applicant == High AND House == High ) AND Credit == High ==> C14
15. If (Applicant is High) and (House is Very_high) then (Credit is Very_high) - ( Applicant == High AND House == Very_high ) AND Credit == Very_high ==> C15

Rule Base 3 Combining
=> Rule = C1 OR C2 OR C3 OR C4 OR C5 OR C6 OR C7 OR C8 OR C9 OR C10 OR C11 OR C12 OR C13 OR C14 OR C15
###Code
def apply_credit_rules(house, income, interest, applicant):
# house
    house_level_very_low = np.fmin(house, house_very_low)  # fixed: was fmin(house, house_low)
house_level_low = np.fmin(house, house_low)
house_level_medium = np.fmin(house, house_medium)
house_level_high = np.fmin(house, house_high)
house_level_very_high = np.fmin(house, house_very_high)
# person income
p_income_level_low = fuzz.interp_membership(x_person_income, p_income_low, income)
p_income_level_medium = fuzz.interp_membership(x_person_income, p_income_medium, income)
p_income_level_high = fuzz.interp_membership(x_person_income, p_income_high, income)
p_income_level_very_high = fuzz.interp_membership(x_person_income, p_income_very_high, income)
# interest
b_interest_level_low = fuzz.interp_membership(x_interest, b_interest_low, interest)
b_interest_level_medium = fuzz.interp_membership(x_interest, b_interest_medium, interest)
b_interest_level_high = fuzz.interp_membership(x_interest, b_interest_high, interest)
# applicant
applicant_level_low = np.fmin(applicant, applicant_low)
applicant_level_medium = np.fmin(applicant, applicant_medium)
applicant_level_high = np.fmin(applicant, applicant_high)
# 1. If (Income is Low) and (Interest is Medium) then (Credit is Very_low)
credit_act_very_low1 = and_rule(p_income_level_low, b_interest_level_medium, credit_very_low)
# 2. If (Income is Low) and (Interest is High) then (Credit is Very_low)
credit_act_very_low2 = and_rule(p_income_level_low, b_interest_level_high, credit_very_low)
# 3. If (Income is Medium) and (Interest is High) then (Credit is Low)
credit_act_low1 = and_rule(p_income_level_medium, b_interest_level_high, credit_low)
# 4. If (Applicant is Low) then (Credit is Very_low)
credit_act_very_low3 = np.fmin(applicant_level_low, credit_very_low)
# 5. If (House is Very_low) then (Credit is Very_low)
credit_act_very_low4 = np.fmin(house_level_very_low, credit_very_low)
# 6. If (Applicant is Medium) and (House is Very_low) then (Credit is Low)
credit_act_low2 = and_rule(applicant_level_medium, house_level_very_low, credit_low)
# 7. If (Applicant is Medium) and (House is Low) then (Credit is Low)
credit_act_low3 = and_rule(applicant_level_medium, house_level_low, credit_low)
# 8. If (Applicant is Medium) and (House is Medium) then (Credit is Medium)
credit_act_medium1 = and_rule(applicant_level_medium, house_level_medium, credit_medium)
# 9. If (Applicant is Medium) and (House is High) then (Credit is High)
credit_act_high1 = and_rule(applicant_level_medium, house_level_high, credit_high)
# 10. If (Applicant is Medium) and (House is Very_high) then (Credit is High)
credit_act_high2 = and_rule(applicant_level_medium, house_level_very_high, credit_high)
# 11. If (Applicant is High) and (House is Very_low) then (Credit is Low)
credit_act_low4 = and_rule(applicant_level_high, house_level_very_low, credit_low)
# 12. If (Applicant is High) and (House is Low) then (Credit is Medium)
credit_act_medium2 = and_rule(applicant_level_high, house_level_low, credit_medium)
# 13. If (Applicant is High) and (House is Medium) then (Credit is High)
credit_act_high3 = and_rule(applicant_level_high, house_level_medium, credit_high)
# 14. If (Applicant is High) and (House is High) then (Credit is High)
credit_act_high4 = and_rule(applicant_level_high, house_level_high, credit_high)
# 15. If (Applicant is High) and (House is Very_high) then (Credit is Very_high)
credit_act_very_high = and_rule(applicant_level_high, house_level_very_high, credit_very_high)
step = or_rule(credit_act_very_low1, credit_act_very_low2, credit_act_very_low3)
credit_act_very_low = np.fmax(step, credit_act_very_low4)
step = or_rule(credit_act_low1, credit_act_low2, credit_act_low3)
credit_act_low = np.fmax(step, credit_act_low4)
credit_act_medium = np.fmax(credit_act_medium1, credit_act_medium2)
step = or_rule(credit_act_high1, credit_act_high2, credit_act_high3)
credit_act_high = np.fmax(step, credit_act_high4)
# credit_act_very_high there is just one
step = or_rule(credit_act_very_low, credit_act_low, credit_act_medium)
credit = or_rule(step, credit_act_high, credit_act_very_high)
return credit
###Output
_____no_output_____
###Markdown
Example Output For Base 3
###Code
# trying the function with an example
credit_eval = apply_credit_rules(h_eval, 45, 4, a_eval)
plt.rcParams["figure.figsize"] = 15, 4
plt.plot(x_credit, credit_very_low, 'c', linestyle='--', linewidth=1.5, label='Very Low')
plt.plot(x_credit, credit_low, 'b', linestyle='--', linewidth=1.5, label='Low')
plt.plot(x_credit, credit_medium, 'g', linestyle='--', linewidth=1.5, label='Medium')
plt.plot(x_credit, credit_high, 'r', linestyle='--', linewidth=1.5, label='High')
plt.plot(x_credit, credit_very_high, 'y', linestyle='--', linewidth=1.5, label='Very High'),plt.title("Credit Value $ x10^3")
plt.legend()
plt.fill_between(x_credit, credit_eval)
plt.ylim(-0.1, 1.1)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Calling the ruleing functions sequentialy
###Code
def apply_all_rules(market_value, location, assets, income, interest, verbose=0):
house = apply_house_rules(market_value, location, verbose)
applicant = apply_applicant_rules(assets,income, verbose)
credit = apply_credit_rules(house, income, interest, applicant)
return credit
###Output
_____no_output_____
###Markdown
Example Output After All Bases
###Code
credit = apply_all_rules(150, 3, 550, 45, 4, verbose=1)
plt.rcParams["figure.figsize"] = 15, 4
plt.plot(x_credit, credit_very_low, 'c', linestyle='--', linewidth=1.5, label='Very Low')
plt.plot(x_credit, credit_low, 'b', linestyle='--', linewidth=1.5, label='Low')
plt.plot(x_credit, credit_medium, 'g', linestyle='--', linewidth=1.5, label='Medium')
plt.plot(x_credit, credit_high, 'r', linestyle='--', linewidth=1.5, label='High')
plt.plot(x_credit, credit_very_high, 'y', linestyle='--', linewidth=1.5, label='Very High'),plt.title("Credit Value $ x10^3")
plt.legend()
plt.fill_between(x_credit, credit, color='b')
plt.ylim(-0.1, 1.1)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Making a Decision
- After all the rules are applied, we defuzzify the output of the rules with the mean-of-maximum method and generate a single value as the decision of the system
###Code
def make_decision(market_value, location, assets, income, interest, verbose=0):
credit = apply_all_rules(market_value, location, assets, income, interest, verbose)
# defuzzification with mean of maximum
defuzz_credit = fuzz.defuzz(x_credit, credit, 'mom')
max_n = np.max(credit)
if (verbose == 1):
plt.rcParams["figure.figsize"] = 15, 4
plt.plot(x_credit, credit_very_low, 'c', linestyle='--', linewidth=1.5, label='Very Low')
plt.plot(x_credit, credit_low, 'b', linestyle='--', linewidth=1.5, label='Low')
plt.plot(x_credit, credit_medium, 'g', linestyle='--', linewidth=1.5, label='Medium')
plt.plot(x_credit, credit_high, 'r', linestyle='--', linewidth=1.5, label='High')
plt.plot(x_credit, credit_very_high, 'y', linestyle='--', linewidth=1.5, label='Very High'),plt.title("Credit Value $ x10^3")
plt.legend()
plt.fill_between(x_credit, credit, color='b')
plt.ylim(-0.1, 1.1)
plt.grid(True)
plt.plot(defuzz_credit, max_n, 'X', color='r')
plt.show()
print "Output: ", defuzz_credit, "x10^3 $"
return defuzz_credit
credit_decision = make_decision(150, 3, 550, 45, 4, verbose=1)
credit_decision = make_decision(600, 6, 500, 40, 5, verbose=1)
###Output
_____no_output_____ |
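###Markdown
The decision above uses mean of maximum ('mom'), but skfuzzy supports other defuzzification strategies as well. A sketch comparing a few of them on the same aggregated output (input values reused from the earlier example):
###Code
# compare defuzzification strategies on the same aggregated fuzzy output
credit = apply_all_rules(150, 3, 550, 45, 4)
for method in ['mom', 'centroid', 'bisector']:
    print(method, fuzz.defuzz(x_credit, credit, method))
###Output
_____no_output_____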
PUBG/eda-team-strategy-guide.ipynb | ###Markdown
Visualise what features are important
Why should you read through this kernel? The goal is to have a visual guide on which strategy leads to the win:
- the data will be read and the memory footprint will be reduced;
- aggregations of the data over teams are performed;
- a baseline **LightGBM** model is trained **on team level** (for detailed code on player level [see my other kernel](https://www.kaggle.com/mlisovyi/pubg-survivor-kit));
- the training is implemented with a simple train/test split;
- **the [SHAP package](https://github.com/slundberg/shap) is used for model explanation**;
- **the [LIME package](https://github.com/marcotcr/lime) (described in the [paper](https://arxiv.org/abs/1602.04938)) is used for model explanation**.
We will use only a subset of games (=matches) to speed up processing, as SHAP is very slow (LIME is somewhat faster, as it builds linear models locally)
###Code
# The number of MATCHES to use in training. Whole training dataset is used anyway. Use it to have fast turn-around. Set to 50k for all entries
max_matches_trn=5000
# The number of entries from test to read in. Use it to have fast turn-around. Set to None for all entries
max_events_tst=None
# Number on CV folds
n_cv=3
###Output
_____no_output_____
###Markdown
Define a function to reduce memory footprint
###Code
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import matplotlib.pyplot as plt
%matplotlib inline
import warnings
warnings.simplefilter(action='ignore', category=Warning)
from sklearn.metrics import mean_squared_error, mean_absolute_error
import os
print(os.listdir("../input"))
def reduce_mem_usage(df):
""" iterate through all the columns of a dataframe and modify the data type
to reduce memory usage.
"""
start_mem = df.memory_usage().sum() / 1024**2
print('Memory usage of dataframe is {:.2f} MB'.format(start_mem))
for col in df.columns:
col_type = df[col].dtype
if col_type != object and col_type.name != 'category' and 'datetime' not in col_type.name:
c_min = df[col].min()
c_max = df[col].max()
if str(col_type)[:3] == 'int':
if c_min > np.iinfo(np.int8).min and c_max < np.iinfo(np.int8).max:
df[col] = df[col].astype(np.int8)
elif c_min > np.iinfo(np.int16).min and c_max < np.iinfo(np.int16).max:
df[col] = df[col].astype(np.int16)
elif c_min > np.iinfo(np.int32).min and c_max < np.iinfo(np.int32).max:
df[col] = df[col].astype(np.int32)
elif c_min > np.iinfo(np.int64).min and c_max < np.iinfo(np.int64).max:
df[col] = df[col].astype(np.int64)
else:
if c_min > np.finfo(np.float16).min and c_max < np.finfo(np.float16).max:
df[col] = df[col].astype(np.float16)
elif c_min > np.finfo(np.float32).min and c_max < np.finfo(np.float32).max:
df[col] = df[col].astype(np.float32)
else:
df[col] = df[col].astype(np.float64)
elif 'datetime' not in col_type.name:
df[col] = df[col].astype('category')
end_mem = df.memory_usage().sum() / 1024**2
print('Memory usage after optimization is: {:.2f} MB'.format(end_mem))
print('Decreased by {:.1f}%'.format(100 * (start_mem - end_mem) / start_mem))
return df
###Output
_____no_output_____
###Markdown
Read in the data
###Code
df_trn = pd.read_csv('../input/train.csv', nrows=None)
df_trn = reduce_mem_usage(df_trn)
df_trn = df_trn.query('matchId < @max_matches_trn')
print('Number of training entries after selecting a subset of matches: {}'.format(df_trn.shape[0]))
# we will NOT use in training
features_not2use = ['Id', 'groupId', 'matchId', 'numGroups']
###Output
_____no_output_____
###Markdown
Feature engineering: group by teams
###Code
agg_team = {c: ['mean', 'min', 'max', 'sum'] for c in [c for c in df_trn.columns if c not in features_not2use and c != 'winPlacePerc']}
agg_team['numGroups'] = ['size']
print(agg_team.keys())
def preprocess(df):
df_gb = df.groupby('groupId').agg(agg_team)
df_gb.columns = pd.Index([e[0] + "_" + e[1].upper() for e in df_gb.columns])
return df_gb
df_trn_gb = preprocess(df_trn)
#this is needed, since for some teams sum of rideDistance is infinite. This is not swallowed by LIME
df_trn_gb = df_trn_gb.replace({np.inf: -1})
y = df_trn.groupby('groupId')['winPlacePerc'].median()
# since we train on the group and out final metric is on user level, we want to assign group size as the weight
w = df_trn_gb['numGroups_SIZE']
###Output
_____no_output_____
###Markdown
Simple train/test split
###Code
from sklearn.model_selection import train_test_split
X_trn, X_tst, y_trn, y_tst = train_test_split(df_trn_gb, y, test_size=0.33, random_state=42)
###Output
_____no_output_____
###Markdown
Train a model
Start by defining handy helper functions...
###Code
%%time
import lightgbm as lgb
from sklearn.base import clone
def train_single_model(clf_, X_, y_, random_state_=314, opt_parameters_={}, fit_params_={}):
'''
A wrapper to train a model with particular parameters
'''
c = clone(clf_)
c.set_params(**opt_parameters_)
c.set_params(random_state=random_state_)
return c.fit(X_, y_, **fit_params_)
mdl_ = lgb.LGBMRegressor(max_depth=-1, min_child_samples=400, random_state=314, silent=True, metric='None',
n_jobs=4, n_estimators=5000, learning_rate=0.1)
fit_params_ = {"early_stopping_rounds":100,
"eval_metric" : 'mae',
'eval_names': ['train', 'early_stop'],
'verbose': 500,
'eval_set': [(X_trn,y_trn), (X_tst,y_tst)],
'sample_weight': y_trn.index.map(w).values,
'eval_sample_weight': [None, y_tst.index.map(w).values]
}
opt_parameters_ = {'objective': 'mae', 'colsample_bytree': 0.75, 'min_child_weight': 10.0, 'num_leaves': 30, 'reg_alpha': 1}
mdl = train_single_model(mdl_, X_trn, y_trn,
fit_params_=fit_params_,
opt_parameters_=opt_parameters_
)
###Output
_____no_output_____
###Markdown
Model interpretation with SHAP
###Code
import shap
shap.initjs()
%%time
explainer=shap.TreeExplainer(mdl.booster_)
shap_values = explainer.shap_values(X_tst)
###Output
_____no_output_____
###Markdown
Visualise what effect features have on the final prediction. Quoting the SHAP github:
> The plot below sorts features by the sum of SHAP value magnitudes over all samples, and uses SHAP values to show the distribution of the impacts each feature has on the model output. The color represents the feature value (**red high, blue low**).
This reveals, for example, that a high `walkDistance_MEAN` (average distance walked by team members) increases the predicted chance of winning (the `winPlacePerc`).
###Code
shap.summary_plot(shap_values, X_tst)
###Output
_____no_output_____
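###Markdown
The ranking shown in the summary plot can also be extracted numerically as the mean absolute SHAP value per feature. A minimal sketch:
###Code
# rank features by mean absolute SHAP value (global importance)
shap_imp = pd.DataFrame({'feature': X_tst.columns,
                         'mean_abs_shap': np.abs(shap_values).mean(axis=0)})
print(shap_imp.sort_values('mean_abs_shap', ascending=False).head(10))
###Output
_____no_output_____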
###Markdown
Let's also look at the impact of various features on the predictions for each individual team. Quoting the documentation again:
> [The plot below]... shows features each contributing to push the model output from the base value (the average model output over the training dataset we passed) to the model output. Features pushing the prediction higher are shown in red, those pushing the prediction lower are in blue.
Note that the plot is actually interactive, so you can see names of variables if you put the cursor on individual components.
###Code
for i in range(5):
display(shap.force_plot(explainer.expected_value, shap_values[i,:], X_trn.iloc[i,:]))
###Output
_____no_output_____
###Markdown
And finally, let's look at how different feature interactions affect the predicted placement. Quoting the documentation:
> To understand how a single feature affects the output of the model we can plot **the SHAP value of that feature vs. the value of the feature for all the examples in a dataset**.
Since SHAP values represent a feature's responsibility for a change in the model output, the plot below represents **the change in predicted placement as any of `'killPlace_MAX', 'walkDistance_MEAN', 'weaponsAcquired_MIN'` changes**. Note that this plot also shows the strongest interaction of the feature with another feature in the dataset:
> Vertical dispersion at a single value of the X axis represents interaction effects with other features. To help reveal these interactions `dependence_plot` automatically selects another feature for coloring. In this case coloring by features on the Z axis highlights interactions.
###Code
for f in ['killPlace_MAX', 'walkDistance_MEAN', 'weaponsAcquired_MIN']:
shap.dependence_plot(f, shap_values, X_tst)
###Output
_____no_output_____
###Markdown
Model interpretation with LIME
###Code
import lime
from lime.lime_tabular import LimeTabularExplainer
###Output
_____no_output_____
###Markdown
Note that LIME seems to work with numpy arrays only and does not digest pandas objects. So we will use `pd.DataFrame.values`
###Code
explainer = LimeTabularExplainer(X_trn.values,
feature_names=X_trn.columns,
class_names=[],
verbose=True,
mode='regression')
###Output
_____no_output_____
###Markdown
Build explanations for the first 5 examples
###Code
exp= []
for i in range(5):
exp.append(explainer.explain_instance(X_tst.iloc[i,:].values, mdl.predict, num_features=10))
###Output
_____no_output_____
###Markdown
Visualise which cuts were most important in the decision making for those 5 examples
###Code
for e in exp:
_ = e.as_pyplot_figure()
###Output
_____no_output_____
###Markdown
Visualise explanation for those 5 examples
###Code
for e in exp:
_ = e.show_in_notebook()
###Output
_____no_output_____ |
Aula05_Claudio/Aula05.ipynb | ###Markdown
Question 1
Statement: Create any list and write a program that prints each element of the list using for.
###Code
times = ['CRUZEIRO','ATLETICO','FLAMENGO','PALMEIRAS']
for time in times:
print(time)
###Output
CRUZEIRO
ATLETICO
FLAMENGO
PALMEIRAS
###Markdown
Question 2
Statement: Write a program that prints all the items of a list using while, and compare it with exercise 1.
###Code
times = ['CRUZEIRO','ATLETICO','FLAMENGO','PALMEIRAS']
i = 0
while i < len(times):
print(times[i])
i+=1
###Output
CRUZEIRO
ATLETICO
FLAMENGO
PALMEIRAS
###Markdown
Question 3
Statement: Write a program that asks the user to type a number n and prints a list with all the numbers from 0 to n-1.
Example: if the user types 5, the program must print [0, 1, 2, 3, 4]
###Code
numero = int(input("Digite um número: "))
i = 0
resultado = []
while i < numero:
resultado.append(i)
i+=1
print(resultado)
###Output
Digite um número: 5
[0, 1, 2, 3, 4]
###Markdown
Question 4
Statement: Write a program that goes through all the items of a list and says how many of them are even.
###Code
lista =list(range(10))
i = 0
resultado = []
while i<len(lista):
if lista[i]%2==0:
resultado.append(lista[i])
i+=1
print("A quantidade de números pares é:", len(resultado))
print(resultado)
###Output
A quantidade de números pares é: 5
[0, 2, 4, 6, 8]
###Markdown
Question 5
Statement: Write a program that prints the largest number in a list, without using the max() function.
###Code
import random
lista = list(random.sample(range(1,100),50))
# start from the first element; the extra lista[i] > lista[i-1] check in the
# original version would miss a maximum sitting at position 0
i = 1
maior = lista[0]
print(lista)
while i < len(lista):
    if lista[i] > maior:
        maior = lista[i]
    i += 1
print(maior)
###Output
[74, 19, 52, 58, 16, 20, 10, 77, 56, 34, 60, 96, 31, 70, 63, 11, 15, 44, 18, 12, 91, 21, 90, 82, 85, 39, 62, 89, 9, 36, 71, 46, 80, 24, 95, 92, 65, 2, 50, 49, 3, 53, 86, 40, 27, 28, 37, 75, 81, 7]
96
###Markdown
Question 6 Statement: Now, using the max() function, write a program that prints the three largest numbers in a list. Hint: use the list method .remove().
###Code
import random
lista = list(random.sample(range(1,100),50))
i=1
maior=0
print(lista)
while i<=3:
print("O ",i,"º maior número é:", max(lista))
lista.remove(max(lista))
i+=1
###Output
[67, 1, 38, 50, 57, 97, 33, 96, 26, 56, 15, 6, 65, 14, 21, 76, 43, 20, 16, 93, 42, 52, 28, 85, 99, 61, 69, 4, 80, 58, 84, 5, 63, 23, 31, 44, 13, 95, 32, 49, 92, 90, 64, 25, 34, 91, 98, 48, 72, 86]
O 1 º maior número é: 99
O 2 º maior número é: 98
O 3 º maior número é: 97
###Markdown
Question 7 Statement: Write a program that, given two lists of the same size, creates a new list in which each element is the sum of the elements of list 1 and list 2 at the same position. Example: given lista1 = [1, 4, 5] and lista2 = [2, 2, 3], then lista3 = [1+2, 4+2, 5+3] = [3, 6, 8]
###Code
lista1 = [1,4,5]
lista2 = [2,2,3]
i=0
resultado = []
while i<len(lista1):
resultado.append(lista1[i]+lista2[i])
i+=1
print(resultado)
###Output
[3, 6, 8]
###Markdown
Question 8 Statement: Write a program that, given two lists of the same size, prints the dot product between them. Note: the dot product is the sum of the products of the number at position i of lista1 and the number at position i of lista2, with i ranging from 0 to the size of the list.
###Code
lista1 = [3,4]
lista2 = [-2,5]
resultado = 0
for i in range(len(lista1)):
resultado+=lista1[i]*lista2[i]
print("O resultado é: ", resultado)
###Output
O resultado é: 14
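###Markdown
An equivalent, more idiomatic one-liner for the dot product pairs the two lists with zip. This is an alternative sketch of the same computation above, not part of the original exercise:
###Code
lista1 = [3, 4]
lista2 = [-2, 5]
# zip pairs elements at the same position; the comprehension sums the products
print("O resultado é: ", sum(a * b for a, b in zip(lista1, lista2)))
###Output
O resultado é:  14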
###Markdown
Question 9 Statement: Write a program that asks the user to enter 5 numbers and, at the end, prints a list with the 5 numbers entered (without converting the numbers to int or float). Example: if the user enters 1, 5, 2, 3, 6, the program should print the list ['1','5','2','3','6']
###Code
resultado = []
i = 0
for i in range(0,5):
numero = input("Digite o "+str(i+1)+" numero")
resultado.append(numero)
print(resultado)
###Output
Digite o 1 numero1
Digite o 2 numero2
Digite o 3 numero3
Digite o 4 numero4
Digite o 5 numero5
['1', '2', '3', '4', '5']
###Markdown
Question 10 Statement: Take the list generated in the previous exercise and turn each of its items into a float. Note: do not change the previous program, only the list generated by it.
###Code
resultado = []
i = 0
for i in range(0,5):
numero = input("Digite o "+str(i+1)+" numero")
resultado.append(numero)
print(resultado)
resultado_float=[]
for item in resultado:
resultado_float.append(float(item))
print(resultado_float)
###Output
Digite o 1 numero1
Digite o 2 numero2
Digite o 3 numero3
Digite o 4 numero4
Digite o 5 numero5
['1', '2', '3', '4', '5']
[1.0, 2.0, 3.0, 4.0, 5.0]
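###Markdown
The same conversion can be written in one line with map (an equivalent alternative to the loop above):
###Code
resultado = ['1', '2', '3', '4', '5']
# map applies float to each string; list() materialises the result
print(list(map(float, resultado)))
###Output
[1.0, 2.0, 3.0, 4.0, 5.0]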
###Markdown
Question 11 Statement: Write a program that asks for the 4 bimonthly grades and shows their arithmetic mean, using lists.
###Code
qtd = int(input("Quantidade de Notas: " ))
notas = []
for i in range(0,qtd):
numero = float(input("Digite a "+str(i+1)+" nota: "))
notas.append(numero)
media = sum(notas)/qtd
print("A média das notas é: ",media )
###Output
Quantidade de Notas: 5
Digite a 1 nota: 5
Digite a 2 nota: 5
Digite a 3 nota: 5
Digite a 4 nota: 5
Digite a 5 nota: 5
A média das notas é: 5.0
###Markdown
Question 12 Statement: Draw a random list of 10 numbers and print: a. a list with the first 4 numbers; b. a list with the last 5 numbers; c. a list containing only the elements at even positions; d. a list containing only the elements at odd positions; e. the reverse of the drawn list (that is, a list that starts with the last element of the drawn list and ends with the first); f. the reverse of the first 5 numbers; g. the reverse of the last 5 numbers.
###Code
import random
lista = list(random.sample(range(1,100),10))
print("Lista Geral: ",lista)
print("Lista com os 4 primeiros números: ",lista[:4])
print("Uma lista com os 5 últimos números:",lista[-5:])
print("Lista contendo apenas os elementos das posições pares: ",lista[::2])
print("Lista contendo apenas os elementos das posições ímpares: ",lista[1::2])
print("Lista invertida: ",lista[::-1])
print("Lista inversa dos 5 primeiros números: ",lista[:5][::-1])
print("Lista inversa dos 5 últimos números: ",lista[-5:][::-1])
###Output
Lista Geral:  [55, 56, 52, 2, 21, 22, 76, 18, 61, 14]
Lista com os 4 primeiros números:  [55, 56, 52, 2]
Uma lista com os 5 últimos números: [22, 76, 18, 61, 14]
Lista contendo apenas os elementos das posições pares:  [55, 52, 21, 76, 61]
Lista contendo apenas os elementos das posições ímpares:  [56, 2, 22, 18, 14]
Lista invertida:  [14, 61, 18, 76, 22, 21, 2, 52, 56, 55]
Lista inversa dos 5 primeiros números:  [21, 2, 52, 56, 55]
Lista inversa dos 5 últimos números:  [14, 61, 18, 76, 22]
###Markdown
Question 13 Statement: Write a program that draws 10 random numbers between 0 and 100 and counts how many of the drawn numbers are greater than 50.
###Code
import random
lista = list(random.sample(range(1,100),10))
print("A lista geral é: ",lista)
lista_maior50 = [i for i in lista if i>50]
print("A quantidade maior do que 50 é :",len(quantidade))
###Output
A lista geral é: [41, 20, 57, 99, 68, 59, 17, 3, 23, 8]
A quantidade maior do que 50 é : 4
###Markdown
Question 14 Statement: Write a program that draws 10 random numbers between 0 and 100 and prints: a. the largest number drawn; b. the smallest number drawn; c. the mean of the drawn numbers; d. the sum of the drawn numbers.
###Code
import random
lista = list(random.sample(range(1,100),10))
print(" A lista sorteado é: ", lista)
print(" O maior número sorteado: ",max(lista))
print(" O menor número sorteado:",min(lista))
print(" A média dos números sorteados:",(sum(int(i) for i in lista)/qtd))
print(" A soma dos números sorteados:",sum(lista))
###Output
A lista sorteado é: [57, 50, 97, 44, 87, 41, 17, 65, 19, 38]
O maior número sorteado: 97
O menor número sorteado: 17
 A média dos números sorteados: 51.5
A soma dos números sorteados: 515
###Markdown
Question 15 Statement: Challenge 1 - Write a program that asks the user for a student's name and age and the number of tests that student took. Then the program should ask the user for the grade of each test. At the end, the program must print a list containing: a. the student's name at position 0; b. the student's age at position 1; c. a list with all the grades at position 2; d. the student's mean at position 3; e. True or False, depending on whether or not the mean is greater than 5, at position 4. Hint: use what you did in the previous exercises to build this program.
###Code
nome = input("Qual o nome do aluno: ")
idade = int(input("Qual a idade do aluno: "))
provas = int(input("Quantas provas ele fez: "))
notas = []
for i in range(0,provas):
numero = float(input("Digite a "+str(i+1)+" nota: "))
notas.append(numero)
media = sum(notas)/provas
resultado = []
resultado.append(nome)
resultado.append(idade)
resultado.append(notas)
resultado.append(media)
resultado.append(media>5)
for linha in resultado:
print(linha)
###Output
Qual o nome do aluno: claudo
Qual a idade do aluno: 40
Quantas provas ele fez: 1
Digite a 1 nota: 4
claudo
40
[4.0]
4.0
False
###Markdown
Question 16 Statement: Challenge 2 - Write a program like the previous one, but that prints the mean without taking the student's highest and lowest grades into account (in this case the number of tests must be greater than two). Hint: make a copy of the list with all the grades before computing the mean.
###Code
nome = input("Qual o nome do aluno: ")
idade = int(input("Qual a idade do aluno: "))
provas = 0
while provas<=2:
provas = int(input("Quantas provas ele fez, digite no mínimo 3 provas: "))
notas = []
for i in range(0,provas):
numero = float(input("Digite a "+str(i+1)+" nota: "))
notas.append(numero)
notas_original = notas[:]
notas.remove(max(notas))
notas.remove(min(notas))
media = sum(notas)/len(notas)
resultado = []
resultado.append(nome)
resultado.append(idade)
resultado.append(notas_original)
resultado.append(media)
resultado.append(media>5)
for linha in resultado:
print(linha)
###Output
Qual o nome do aluno: a
Qual a idade do aluno: 1
Quantas provas ele fez, digite no mínimo 3 provas: 4
Digite a 1 nota: 2
Digite a 2 nota: 2
Digite a 3 nota: 3
Digite a 4 nota: 3
a
1
[2.0, 2.0, 3.0, 3.0]
2.5
False
###Markdown
Question 17 Statement: Challenge 3 - Write a program that asks the user for a CPF number and checks whether it is valid. To do so, the program must first multiply each of the first 9 digits of the CPF by the numbers from 10 down to 2 and add up all the results. The result must be multiplied by 10 and divided by 11. The remainder of this division must equal the first check digit (the 10th digit). Next, the program must multiply each of the first 10 digits of the CPF by the numbers from 11 down to 2 and repeat the previous procedure to check the second check digit. Example: if the CPF is 286.255.878-87 the program must first compute: x = (2 * 10 + 8 * 9 + 6 * 8 + 2 * 7 + 5 * 6 + 5 * 5 + 8 * 4 + 7 * 3 + 8 * 2). Then, the program must test whether x*10%11 == 8 (the tenth digit of the CPF). If so, the program must compute: x = (2 * 11 + 8 * 10 + 6 * 9 + 2 * 8 + 5 * 7 + 5 * 6 + 8 * 5 + 7 * 4 + 8 * 3 + 8 * 2) and check whether x * 10 % 11 == 7 (the eleventh digit of the CPF).
###Code
cpf = ''
while len([i for i in cpf if(i.isdigit())]) != 11:
cpf = input("Digite o CPF: ")
cpf = [i for i in cpf if(i.isdigit())]
fator = 10
x = 0
for numero in cpf:
x += (int(numero) * fator)
fator -= 1
if fator < 2:
break
digito = (x*10 % 11)
if digito==10:
    digito = 0
if digito==int(cpf[9]):
x = 0
fator = 11
for numero in cpf:
x += (int(numero) * fator)
fator -= 1
if fator < 2:
break
digito = (x*10 % 11)
if digito==10:
        digito = 0
if digito == int(cpf[10]):
print("O cpf é valido")
else:
print("Cpf Inválido")
else:
    print("Cpf Inválido")
###Output
_____no_output_____
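###Markdown
As a sanity check, the worked example from the statement can be verified directly. This snippet is independent of the program above, and the variable names are illustrative:
###Code
digitos = [2, 8, 6, 2, 5, 5, 8, 7, 8, 8, 7]  # CPF 286.255.878-87
# First check digit: first 9 digits weighted 10..2
x1 = sum(d * f for d, f in zip(digitos[:9], range(10, 1, -1)))
# Second check digit: first 10 digits weighted 11..2
x2 = sum(d * f for d, f in zip(digitos[:10], range(11, 1, -1)))
print(x1 * 10 % 11, x2 * 10 % 11)  # expected: 8 7, the two check digits
###Output
8 7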
###Markdown
Question 1 Statement: Write a program that asks the user to type a word and prints each letter on a separate line.
###Code
palavra = input("Digite uma palavra: ")
for letra in palavra:
print(letra)
###Output
Digite uma palavra: carro azul
c
a
r
r
o
a
z
u
l
###Markdown
Question 2 Statement: Write a program that asks the user to type a word and creates a new identical string, copying the typed word letter by letter, then prints the new string.
###Code
palavra = input("Digite uma palavra: ")
novapalavra = ""
for letra in palavra:
print(letra)
novapalavra+=letra
print(novapalavra)
###Output
Digite uma palavra: carro
c
a
r
r
o
carro
###Markdown
Question 3 Statement: Change the previous exercise so that the copied string alternates between uppercase and lowercase letters. Example: if the user types "latex" the program should print "LaTeX".
###Code
palavra = input("Digite uma palavra: ")
novapalavra = ""
upper = 1
for letra in palavra:
print(letra)
if upper==1:
novapalavra+=letra.upper()
upper = 0
else:
novapalavra+=letra.lower()
upper = 1
print(novapalavra)
###Output
Digite uma palavra: latex
l
a
t
e
x
LaTeX
###Markdown
Question 4 Statement: Write a program that asks the user to type a word and creates a new identical string, but with a space between each letter, then prints the new string. Example: if the user types "python" the program should print "p y t h o n "
###Code
palavra = input("Digite uma palavra: ")
novapalavra = ""
for letra in palavra:
novapalavra+=letra+" "
print(novapalavra)
###Output
Digite uma palavra: python
p y t h o n
###Markdown
Question 5 Statement: Write a function that receives a string and returns a new string replacing: 'a' with '4', 'e' with '3', 'I' with '1', 't' with '7'.
###Code
palavra = input("Digite uma palavra: ")
novapalavra = ""
for letra in palavra:
if letra=='a':
letra='4'
elif letra=='e':
letra='3'
elif letra=='i':
letra='1'
elif letra=='t':
letra='7'
novapalavra+=letra+" "
print(novapalavra)
###Output
Digite uma palavra: tatu
7 4 7 u
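###Markdown
The same substitution can also be done with str.translate and a mapping table (an alternative sketch that replaces the letters without adding spaces):
###Code
# maketrans builds a character-to-character translation table
tabela = str.maketrans({'a': '4', 'e': '3', 'i': '1', 't': '7'})
print("tatu".translate(tabela))
###Output
747u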
###Markdown
Question 6 Statement: Write a function that receives a string and returns it reversed. Example: receives "teste" and returns "etset".
###Code
palavra = input("Digite uma palavra: ")
novapalavra = ""
for letra in reversed(palavra):
novapalavra+=letra
print(novapalavra)
###Output
Digite uma palavra: teste
etset
###Markdown
Question 7 Statement: Now write a function that receives a word and says whether it is a palindrome, that is, whether it is equal to itself reversed. Hint: use the function from exercise 6.
###Code
palavra = input("Digite uma palavra: ")
novapalavra = ""
for letra in reversed(palavra):
novapalavra+=letra
if palavra==novapalavra:
print("A palavra é um palindromo!")
else:
print("A palavra não é um palindromo")
###Output
Digite uma palavra: reviver
A palavra é um palindromo!
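###Markdown
A shorter palindrome check uses slice reversal, since palavra[::-1] is the reversed string (an alternative to building it in a loop):
###Code
palavra = "reviver"
# compare the word with its reverse
print(palavra == palavra[::-1])
###Output
True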
###Markdown
Question 8 Statement: Write a function that receives a text and a word, then checks whether the word is in the text, returning True or False.
###Code
texto = input("Digite um texto: ")
palavra = input("Digite uma palavra: ")
print(palavra in texto)
###Output
Digite um texto: carro azul vermelho
Digite uma palavra: verde
False
###Markdown
Question 9 Statement: Write a function that receives a string containing numbers as well as letters and special characters, and separates the letters into one variable and the numbers into another (the special characters can be discarded). At the end the function must print both variables.
###Code
texto = input("Digite um texto: ")
numero = []
letras = []
for letra in texto:
if letra.isdigit():
numero.append(letra)
if letra.isalpha():
letras.append(letra)
print("Os numeros são: ",numero)
print("As letras são: ",letras)
###Output
Digite um texto: carro azul 12345 %$
Os numeros são: ['1', '2', '3', '4', '5']
As letras são: ['c', 'a', 'r', 'r', 'o', 'a', 'z', 'u', 'l']
###Markdown
Question 10 Statement: Challenge - Write a function that receives a string and a letter and: a. prints how many times the letter appears in the string; b. prints all the positions at which the letter appears in the string; c. returns the distance between the first and the last appearance of that letter in the string.
###Code
texto = input("Digite uma string: ")
quantidade = 0
posicao =[]
i = 0
while i<len(texto):
if texto[i].lower()=='e':
quantidade+=1
posicao.append(i)
i+=1
print("Quantas vezes a letra aparece na string:", quantidade)
print("Posições em que a letra aparece na string: ",posicao)
print("Distância entre a primeira e a última aparição dessa letra na string:",posicao[len(posicao)-1]-posicao[0])
###Output
Digite uma string: ecasae
Quantas vezes a letra aparece na string: 2
Posições em que a letra aparece na string: [0, 5]
Distância entre a primeira e a última aparição dessa letra na string: 5
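###Markdown
Built-in string methods give the same answers more directly (an equivalent alternative for the fixed letter 'e'):
###Code
texto = 'ecasae'
print(texto.count('e'))                    # how many times the letter appears
print(texto.rfind('e') - texto.find('e'))  # distance between first and last
###Output
2
5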
###Markdown
Question 11 Statement: Super Challenge! - Write a function that encrypts a message by replacing each letter with the opposite letter of the alphabet: 'a' with 'z', 'b' with 'y', 'c' with 'x'...
###Code
alfabeto = ['a','b','c','d','e','f','g','h','i','j','k','l','m','n','o','p','q','r','s','t','u','v','w','x','y','z']
texto = input("Digite um texto: ")
resposta = ''
for letra in texto:
if letra.isalpha():
resposta+=alfabeto[(alfabeto.index(letra.lower())+1)*-1]
else:
resposta+=letra
print(resposta)
###Output
Digite um texto: a casa e azul
z xzhz v zafo
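###Markdown
The same mirror-alphabet cipher can be built once with str.maketrans (an alternative sketch producing the same output):
###Code
import string
# map each lowercase letter to its opposite: a->z, b->y, c->x, ...
tabela = str.maketrans(string.ascii_lowercase, string.ascii_lowercase[::-1])
print("a casa e azul".translate(tabela))
###Output
z xzhz v zafo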
|
pythonA2Z.ipynb | ###Markdown
Python is case sensitive
###Code
course="Python for beginners"
print(course.find('y'))
print(course.find('Y'))
print(course.replace('for','4'))
print(course)
print('Python' in course)
#operator precedence
###Output
_____no_output_____
###Markdown
 https://www.programiz.com/python-programming/precedence-associativity WHILE
###Code
i=1
while i<=5:
print(i)
i+=1
i=1
while i<=10:
print(i*'*')
i+=1
###Output
*
**
***
****
*****
******
*******
********
*********
**********
###Markdown
LISTS
###Code
names=["John","Bob","Mosh","Sam","Mary"]
print(names[0])
print(names[-1])
names[0]="Jon"
print(names)
print(names[0:3])
print(names)
###Output
John
Mary
['Jon', 'Bob', 'Mosh', 'Sam', 'Mary']
['Jon', 'Bob', 'Mosh']
['Jon', 'Bob', 'Mosh', 'Sam', 'Mary']
###Markdown
LIST METHODS
###Code
numbers=[1,2,3,4,5]
numbers.append(6)
print(numbers)
print(1 in numbers)
print(len(numbers))
numbers.insert(0, -1)
print(numbers)
numbers.remove(3)
print(numbers)
numbers.clear()
print(numbers)
###Output
[1, 2, 3, 4, 5, 6]
True
6
[-1, 1, 2, 3, 4, 5, 6]
[-1, 1, 2, 4, 5, 6]
[]
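###Markdown
A few more commonly used list methods (an additional illustration):
###Code
numbers = [3, 1, 2]
numbers.sort()            # in-place ascending sort
print(numbers)
print(numbers.pop())      # removes and returns the last element
print(numbers.index(1))   # position of the first occurrence of 1
###Output
[1, 2, 3]
3
0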
###Markdown
FOR LOOP + LIST
###Code
numbers=[1,2,3,4,5]
for i in numbers:
print(i)
print(" ")
i=0
while i < len(numbers):
print(numbers[i])
i+=1
###Output
1
2
3
4
5
1
2
3
4
5
###Markdown
RANGE
###Code
numbers= range(5)
print(numbers)
for number in numbers:
print(number)
###Output
range(0, 5)
0
1
2
3
4
###Markdown
TUPLES (IMMUTABLE)
###Code
numbers=(1,2,3,4,3)
print(numbers.count(3)) #counts how many 3s r there
print(numbers.index(3))
###Output
2
2
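###Markdown
Tuples are immutable, so item assignment raises an error. A small illustrative check (the try/except is added here for demonstration):
###Code
numbers = (1, 2, 3)
try:
    numbers[0] = 99   # tuples do not support item assignment
except TypeError as e:
    print(e)
###Output
'tuple' object does not support item assignment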
|
docs/tutorials/custom_aggregators.ipynb | ###Markdown
Copyright 2021 The TensorFlow Federated Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Implementing Custom Aggregations In this tutorial, we explain design principles behind the `tff.aggregators` module and best practices for implementing custom aggregation of values from clients to server.**Prerequisites.** This tutorial assumes you are already familiar with basic concepts of [Federated Core](https://www.tensorflow.org/federated/federated_core) such as placements (`tff.SERVER`, `tff.CLIENTS`), how TFF represents computations (`tff.tf_computation`, `tff.federated_computation`) and their type signatures.
###Code
#@test {"skip": true}
!pip install --quiet --upgrade tensorflow_federated_nightly
!pip install --quiet --upgrade nest_asyncio
import nest_asyncio
nest_asyncio.apply()
###Output
_____no_output_____
###Markdown
Design summary In TFF, "aggregation" refers to the movement of a set of values on `tff.CLIENTS` to produce an aggregate value of the same type on `tff.SERVER`. That is, each individual client value need not be available. For example in federated learning, client model updates are averaged to get an aggregate model update to apply to the global model on the server.In addition to operators accomplishing this goal such as `tff.federated_sum`, TFF provides `tff.templates.AggregationProcess` (a [stateful process](https://www.tensorflow.org/federated/federated_learning#modeling_state)) which formalizes the type signature for aggregation computation so it can generalize to more complex forms than a simple sum.The main components of the `tff.aggregators` module are *factories* for creation of the `AggregationProcess`, which are designed to be generally useful and replaceable building blocks of TFF in two aspects: 1. *Parameterized computations.* Aggregation is an independent building block that can be plugged into other TFF modules designed to work with `tff.aggregators` to parameterize their necessary aggregation.Example: ```learning_process = tff.learning.build_federated_averaging_process( ..., model_update_aggregation_factory=tff.aggregators.MeanFactory())``` 2. *Aggregation composition.* An aggregation building block can be composed with other aggregation building blocks to create more complex composite aggregations.Example: ```secure_mean = tff.aggregators.MeanFactory( value_sum_factory=tff.aggregators.SecureSumFactory(...))``` The rest of this tutorial explains how these two goals are achieved. Aggregation process We first summarize the `tff.templates.AggregationProcess`, and follow with the factory pattern for its creation.The `tff.templates.AggregationProcess` is a `tff.templates.MeasuredProcess` with type signatures specified for aggregation. In particular, the `initialize` and `next` functions have the following type signatures:* `( -> state_type@SERVER)`* `(<state_type@SERVER, {value_type}@CLIENTS, *> -> <state_type@SERVER, value_type@SERVER, measurements_type@SERVER>)`The state (of type `state_type`) must be placed at server. The `next` function takes as input arguments the state and a value to be aggregated (of type `value_type`) placed at clients. The `*` means optional other input arguments, for instance weights in a weighted mean. It returns an updated state object, the aggregated value of the same type placed at server, and some measurements.Note that both the state to be passed between executions of the `next` function, and the reported measurements intended to report any information depending on a specific execution of the `next` function, may be empty. Nevertheless, they have to be explicitly specified for other parts of TFF to have a clear contract to follow.Other TFF modules, for instance the model updates in `tff.learning`, are expected to use the `tff.templates.AggregationProcess` to parameterize how values are aggregated. However, exactly what values are aggregated, and what their type signatures are, depends on other details of the model being trained and the learning algorithm used to do it.To make aggregation independent of the other aspects of computations, we use the factory pattern -- we create the appropriate `tff.templates.AggregationProcess` once the relevant type signatures of objects to be aggregated are available, by invoking the `create` method of the factory. Direct handling of the aggregation process is thus needed only for library authors, who are responsible for this creation.
 Aggregation process factories There are two abstract base factory classes for unweighted and weighted aggregation. Their `create` method takes type signatures of value to be aggregated and returns a `tff.templates.AggregationProcess` for aggregation of such values.The process created by `tff.aggregators.UnweightedAggregationFactory` takes two input arguments: (1) state at server and (2) value of specified type `value_type`.An example implementation is `tff.aggregators.SumFactory`.The process created by `tff.aggregators.WeightedAggregationFactory` takes three input arguments: (1) state at server, (2) value of specified type `value_type` and (3) weight of type `weight_type`, as specified by the factory's user when invoking its `create` method.An example implementation is `tff.aggregators.MeanFactory` which computes a weighted mean.The factory pattern is how we achieve the first goal stated above; that aggregation is an independent building block. For example, when changing which model variables are trainable, a complex aggregation does not necessarily need to change; the factory representing it will be invoked with a different type signature when used by a method such as `tff.learning.build_federated_averaging_process`. Compositions Recall that a general aggregation process can encapsulate (a) some preprocessing of the values at clients, (b) movement of values from client to server, and (c) some postprocessing of the aggregated value at the server. The second goal stated above, aggregation composition, is realized inside the `tff.aggregators` module by structuring the implementation of the aggregation factories such that part (b) can be delegated to another aggregation factory.Rather than implementing all necessary logic within a single factory class, the implementations are by default focused on a single aspect relevant for aggregation. When needed, this pattern then enables us to replace the building blocks one at a time.An example is the weighted `tff.aggregators.MeanFactory`. Its implementation multiplies provided values and weights at clients, then sums both weighted values and weights independently, and then divides the sum of weighted values by the sum of weights at the server. Instead of implementing the summations by directly using the `tff.federated_sum` operator, the summation is delegated to two instances of `tff.aggregators.SumFactory`.Such structure makes it possible for the two default summations to be replaced by different factories, which realize the sum differently. For example, a `tff.aggregators.SecureSumFactory`, or a custom implementation of the `tff.aggregators.UnweightedAggregationFactory`. Conversely, `tff.aggregators.MeanFactory` can itself be an inner aggregation of another factory such as `tff.aggregators.clipping_factory`, if the values are to be clipped before averaging.See the previous [Tuning recommended aggregations for learning](tuning_recommended_aggregators.ipynb) tutorial for recommended uses of the composition mechanism using existing factories in the `tff.aggregators` module. Best practices by example We are going to illustrate the `tff.aggregators` concepts in detail by implementing a simple example task, and make it progressively more general. Another way to learn is to look at the implementation of existing factories.
###Code
import collections
import tensorflow as tf
import tensorflow_federated as tff
###Output
_____no_output_____
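###Markdown
Before diving into the example task, here is a concrete sketch of the composition pattern described above: clipping wrapped around a weighted mean whose inner summation is delegated to a secure sum. The numeric clipping norm and secure-sum bounds below are hypothetical placeholder values, not recommendations, and the sketch only constructs the factory without running it.
###Code
# Sketch only: composes the factories named in the text above.
clipped_secure_mean = tff.aggregators.clipping_factory(
    clipping_norm=1.0,
    inner_agg_factory=tff.aggregators.MeanFactory(
        value_sum_factory=tff.aggregators.SecureSumFactory(
            upper_bound_threshold=1.0, lower_bound_threshold=-1.0)))
###Output
_____no_output_____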
###Markdown
Instead of summing `value`, the example task is to sum `value * 2.0` and then divide the sum by `2.0`. The aggregation result is thus mathematically equivalent to directly summing the `value`, and could be thought of as consisting of three parts: (1) scaling at clients (2) summing across clients (3) unscaling at server. NOTE: This task is not necessarily useful in practice. Nevertheless, it is helpful in explaining the underlying concepts. Following the design explained above, the logic will be implemented as a subclass of `tff.aggregators.UnweightedAggregationFactory`, which creates appropriate `tff.templates.AggregationProcess` when given a `value_type` to aggregate: Minimal implementation For the example task, the computations necessary are always the same, so there is no need for using state. It is thus empty, and represented as `tff.federated_value((), tff.SERVER)`. The same holds for measurements, for now.The minimal implementation of the task is thus as follows:
###Code
class ExampleTaskFactory(tff.aggregators.UnweightedAggregationFactory):
def create(self, value_type):
@tff.federated_computation()
def initialize_fn():
return tff.federated_value((), tff.SERVER)
@tff.federated_computation(initialize_fn.type_signature.result,
tff.type_at_clients(value_type))
def next_fn(state, value):
scaled_value = tff.federated_map(
tff.tf_computation(lambda x: x * 2.0), value)
summed_value = tff.federated_sum(scaled_value)
unscaled_value = tff.federated_map(
tff.tf_computation(lambda x: x / 2.0), summed_value)
measurements = tff.federated_value((), tff.SERVER)
return tff.templates.MeasuredProcessOutput(
state=state, result=unscaled_value, measurements=measurements)
return tff.templates.AggregationProcess(initialize_fn, next_fn)
###Output
_____no_output_____
###Markdown
Whether everything works as expected can be verified with the following code:
###Code
client_data = (1.0, 2.0, 5.0)
factory = ExampleTaskFactory()
aggregation_process = factory.create(tff.TensorType(tf.float32))
print(f'Type signatures of the created aggregation process:\n'
f' - initialize: {aggregation_process.initialize.type_signature}\n'
f' - next: {aggregation_process.next.type_signature}\n')
state = aggregation_process.initialize()
output = aggregation_process.next(state, client_data)
print(f'Aggregation result: {output.result} (expected 8.0)')
###Output
Type signatures of the created aggregation process:
- initialize: ( -> <>@SERVER)
- next: (<state=<>@SERVER,value={float32}@CLIENTS> -> <state=<>@SERVER,result=float32@SERVER,measurements=<>@SERVER>)
Aggregation result: 8.0 (expected 8.0)
###Markdown
Statefulness and measurements Statefulness is broadly used in TFF to represent computations that are expected to be executed iteratively and change with each iteration. For example, the state of a learning computation contains the weights of the model being learned.To illustrate how to use state in an aggregation computation, we modify the example task. Instead of multiplying `value` by `2.0`, we multiply it by the iteration index - the number of times the aggregation has been executed.To do so, we need a way to keep track of the iteration index, which is achieved through the concept of state. In the `initialize_fn`, instead of creating an empty state, we initialize the state to be a scalar zero. Then, state can be used in the `next_fn` in three steps: (1) increment by `1.0`, (2) use to multiply `value`, and (3) return as the new updated state. Once this is done, you may note: *But exactly the same code as above can be used to verify all works as expected. How do I know something has actually changed?*Good question! This is where the concept of measurements becomes useful. In general, measurements can report any value relevant to a single execution of the `next` function, which could be used for monitoring. In this case, it can be the `summed_value` from the previous example. That is, the value before the "unscaling" step, which should depend on the iteration index. *Again, this is not necessarily useful in practice, but illustrates the relevant mechanism.*The stateful answer to the task thus looks as follows:
###Code
class ExampleTaskFactory(tff.aggregators.UnweightedAggregationFactory):
def create(self, value_type):
@tff.federated_computation()
def initialize_fn():
return tff.federated_value(0.0, tff.SERVER)
@tff.federated_computation(initialize_fn.type_signature.result,
tff.type_at_clients(value_type))
def next_fn(state, value):
new_state = tff.federated_map(
tff.tf_computation(lambda x: x + 1.0), state)
state_at_clients = tff.federated_broadcast(new_state)
scaled_value = tff.federated_map(
tff.tf_computation(lambda x, y: x * y), (value, state_at_clients))
summed_value = tff.federated_sum(scaled_value)
unscaled_value = tff.federated_map(
tff.tf_computation(lambda x, y: x / y), (summed_value, new_state))
return tff.templates.MeasuredProcessOutput(
state=new_state, result=unscaled_value, measurements=summed_value)
return tff.templates.AggregationProcess(initialize_fn, next_fn)
###Output
_____no_output_____
###Markdown
Note that the `state` that comes into `next_fn` as input is placed at server. In order to use it at clients, it first needs to be communicated, which is achieved using the `tff.federated_broadcast` operator.To verify all works as expected, we can now look at the reported `measurements`, which should be different with each round of execution, even if run with the same `client_data`.
###Code
client_data = (1.0, 2.0, 5.0)
factory = ExampleTaskFactory()
aggregation_process = factory.create(tff.TensorType(tf.float32))
print(f'Type signatures of the created aggregation process:\n'
f' - initialize: {aggregation_process.initialize.type_signature}\n'
f' - next: {aggregation_process.next.type_signature}\n')
state = aggregation_process.initialize()
output = aggregation_process.next(state, client_data)
print('| Round #1')
print(f'| Aggregation result: {output.result} (expected 8.0)')
print(f'| Aggregation measurements: {output.measurements} (expected 8.0 * 1)')
output = aggregation_process.next(output.state, client_data)
print('\n| Round #2')
print(f'| Aggregation result: {output.result} (expected 8.0)')
print(f'| Aggregation measurements: {output.measurements} (expected 8.0 * 2)')
output = aggregation_process.next(output.state, client_data)
print('\n| Round #3')
print(f'| Aggregation result: {output.result} (expected 8.0)')
print(f'| Aggregation measurements: {output.measurements} (expected 8.0 * 3)')
###Output
Type signatures of the created aggregation process:
- initialize: ( -> float32@SERVER)
- next: (<state=float32@SERVER,value={float32}@CLIENTS> -> <state=float32@SERVER,result=float32@SERVER,measurements=float32@SERVER>)
| Round #1
| Aggregation result: 8.0 (expected 8.0)
| Aggregation measurements: 8.0 (expected 8.0 * 1)
| Round #2
| Aggregation result: 8.0 (expected 8.0)
| Aggregation measurements: 16.0 (expected 8.0 * 2)
| Round #3
| Aggregation result: 8.0 (expected 8.0)
| Aggregation measurements: 24.0 (expected 8.0 * 3)
###Markdown
Structured types The model weights of a model trained in federated learning are usually represented as a collection of tensors, rather than a single tensor. In TFF, this is represented as `tff.StructType` and generally useful aggregation factories need to be able to accept the structured types.However, in the above examples, we only worked with a `tff.TensorType` object. If we try to use the previous factory to create the aggregation process with a `tff.StructType([(tf.float32, (2,)), (tf.float32, (3,))])`, we get a strange error because TensorFlow will try to multiply a `tf.Tensor` and a `list`.The problem is that instead of multiplying the structure of tensors by a constant, we need to multiply *each tensor in the structure* by a constant. The usual solution to this problem is to use the `tf.nest` module inside of the created `tff.tf_computation`s.The version of the previous `ExampleTaskFactory` compatible with structured types thus looks as follows:
###Code
@tff.tf_computation()
def scale(value, factor):
return tf.nest.map_structure(lambda x: x * factor, value)
@tff.tf_computation()
def unscale(value, factor):
return tf.nest.map_structure(lambda x: x / factor, value)
@tff.tf_computation()
def add_one(value):
return value + 1.0
class ExampleTaskFactory(tff.aggregators.UnweightedAggregationFactory):
def create(self, value_type):
@tff.federated_computation()
def initialize_fn():
return tff.federated_value(0.0, tff.SERVER)
@tff.federated_computation(initialize_fn.type_signature.result,
tff.type_at_clients(value_type))
def next_fn(state, value):
new_state = tff.federated_map(add_one, state)
state_at_clients = tff.federated_broadcast(new_state)
scaled_value = tff.federated_map(scale, (value, state_at_clients))
summed_value = tff.federated_sum(scaled_value)
unscaled_value = tff.federated_map(unscale, (summed_value, new_state))
return tff.templates.MeasuredProcessOutput(
state=new_state, result=unscaled_value, measurements=summed_value)
return tff.templates.AggregationProcess(initialize_fn, next_fn)
###Output
_____no_output_____
###Markdown
This example highlights a pattern which may be useful to follow when structuring TFF code. When not dealing with very simple operations, the code becomes more legible when the `tff.tf_computation`s that will be used as building blocks inside a `tff.federated_computation` are created in a separate place. Inside of the `tff.federated_computation`, these building blocks are only connected using the intrinsic operators. To verify it works as expected:
###Code
client_data = [[[1.0, 2.0], [3.0, 4.0, 5.0]],
[[1.0, 1.0], [3.0, 0.0, -5.0]]]
factory = ExampleTaskFactory()
aggregation_process = factory.create(
tff.to_type([(tf.float32, (2,)), (tf.float32, (3,))]))
print(f'Type signatures of the created aggregation process:\n'
f' - initialize: {aggregation_process.initialize.type_signature}\n'
f' - next: {aggregation_process.next.type_signature}\n')
state = aggregation_process.initialize()
output = aggregation_process.next(state, client_data)
print(f'Aggregation result: [{output.result[0]}, {output.result[1]}]\n'
f' Expected: [[2. 3.], [6. 4. 0.]]')
###Output
Type signatures of the created aggregation process:
- initialize: ( -> float32@SERVER)
- next: (<state=float32@SERVER,value={<float32[2],float32[3]>}@CLIENTS> -> <state=float32@SERVER,result=<float32[2],float32[3]>@SERVER,measurements=<float32[2],float32[3]>@SERVER>)
Aggregation result: [[2. 3.], [6. 4. 0.]]
Expected: [[2. 3.], [6. 4. 0.]]
###Markdown
Inner aggregations The final step is to optionally enable delegation of the actual aggregation to other factories, in order to allow easy composition of different aggregation techniques.This is achieved by creating an optional `inner_factory` argument in the constructor of our `ExampleTaskFactory`. If not specified, `tff.aggregators.SumFactory` is used, which applies the `tff.federated_sum` operator used directly in the previous section.When `create` is called, we can first call `create` of the `inner_factory` to create the inner aggregation process with the same `value_type`.The state of our process returned by `initialize_fn` is a composition of two parts: the state created by "this" process, and the state of the just created inner process.The implementation of the `next_fn` differs in that the actual aggregation is delegated to the `next` function of the inner process, and in how the final output is composed. The state is again composed of "this" and "inner" state, and measurements are composed in a similar manner as an `OrderedDict`.The following is an implementation of such pattern.
###Code
@tff.tf_computation()
def scale(value, factor):
return tf.nest.map_structure(lambda x: x * factor, value)
@tff.tf_computation()
def unscale(value, factor):
return tf.nest.map_structure(lambda x: x / factor, value)
@tff.tf_computation()
def add_one(value):
return value + 1.0
class ExampleTaskFactory(tff.aggregators.UnweightedAggregationFactory):
def __init__(self, inner_factory=None):
if inner_factory is None:
inner_factory = tff.aggregators.SumFactory()
self._inner_factory = inner_factory
def create(self, value_type):
inner_process = self._inner_factory.create(value_type)
@tff.federated_computation()
def initialize_fn():
my_state = tff.federated_value(0.0, tff.SERVER)
inner_state = inner_process.initialize()
return tff.federated_zip((my_state, inner_state))
@tff.federated_computation(initialize_fn.type_signature.result,
tff.type_at_clients(value_type))
def next_fn(state, value):
my_state, inner_state = state
my_new_state = tff.federated_map(add_one, my_state)
my_state_at_clients = tff.federated_broadcast(my_new_state)
scaled_value = tff.federated_map(scale, (value, my_state_at_clients))
# Delegation to an inner factory, returning values placed at SERVER.
inner_output = inner_process.next(inner_state, scaled_value)
unscaled_value = tff.federated_map(unscale, (inner_output.result, my_new_state))
new_state = tff.federated_zip((my_new_state, inner_output.state))
measurements = tff.federated_zip(
collections.OrderedDict(
scaled_value=inner_output.result,
example_task=inner_output.measurements))
return tff.templates.MeasuredProcessOutput(
state=new_state, result=unscaled_value, measurements=measurements)
return tff.templates.AggregationProcess(initialize_fn, next_fn)
###Output
_____no_output_____
###Markdown
When delegating to the `inner_process.next` function, the return structure we get is a `tff.templates.MeasuredProcessOutput`, with the same three fields - `state`, `result` and `measurements`. When creating the overall return structure of the composed aggregation process, the `state` and `measurements` fields should be generally composed and returned together. In contrast, the `result` field corresponds to the value being aggregated and instead "flows through" the composed aggregation.The `state` object should be seen as an implementation detail of the factory, and thus the composition could be of any structure. However, `measurements` correspond to values to be reported to the user at some point. Therefore, we recommend using `OrderedDict`, with composed naming such that it is clear where in a composition a reported metric comes from.Note also the use of the `tff.federated_zip` operator. The `state` object controlled by the created process should be a `tff.FederatedType`. If we had instead returned `(this_state, inner_state)` in the `initialize_fn`, its return type signature would be a `tff.StructType` containing a 2-tuple of `tff.FederatedType`s. The use of `tff.federated_zip` "lifts" the `tff.FederatedType` to the top level. This is similarly used in the `next_fn` when preparing the state and measurements to be returned. Finally, we can see how this can be used with the default inner aggregation:
###Code
client_data = (1.0, 2.0, 5.0)
factory = ExampleTaskFactory()
aggregation_process = factory.create(tff.TensorType(tf.float32))
state = aggregation_process.initialize()
output = aggregation_process.next(state, client_data)
print('| Round #1')
print(f'| Aggregation result: {output.result} (expected 8.0)')
print(f'| measurements[\'scaled_value\']: {output.measurements["scaled_value"]}')
print(f'| measurements[\'example_task\']: {output.measurements["example_task"]}')
output = aggregation_process.next(output.state, client_data)
print('\n| Round #2')
print(f'| Aggregation result: {output.result} (expected 8.0)')
print(f'| measurements[\'scaled_value\']: {output.measurements["scaled_value"]}')
print(f'| measurements[\'example_task\']: {output.measurements["example_task"]}')
###Output
| Round #1
| Aggregation result: 8.0 (expected 8.0)
| measurements['scaled_value']: 8.0
| measurements['example_task']: ()
| Round #2
| Aggregation result: 8.0 (expected 8.0)
| measurements['scaled_value']: 16.0
| measurements['example_task']: ()
###Markdown
... and with a different inner aggregation. For example, an `ExampleTaskFactory`:
###Code
client_data = (1.0, 2.0, 5.0)
# Note the inner delegation can be to any UnweightedAggregaionFactory.
# In this case, each factory creates process that multiplies by the iteration
# index (1, 2, 3, ...), thus their combination multiplies by (1, 4, 9, ...).
factory = ExampleTaskFactory(ExampleTaskFactory())
aggregation_process = factory.create(tff.TensorType(tf.float32))
state = aggregation_process.initialize()
output = aggregation_process.next(state, client_data)
print('| Round #1')
print(f'| Aggregation result: {output.result} (expected 8.0)')
print(f'| measurements[\'scaled_value\']: {output.measurements["scaled_value"]}')
print(f'| measurements[\'example_task\']: {output.measurements["example_task"]}')
output = aggregation_process.next(output.state, client_data)
print('\n| Round #2')
print(f'| Aggregation result: {output.result} (expected 8.0)')
print(f'| measurements[\'scaled_value\']: {output.measurements["scaled_value"]}')
print(f'| measurements[\'example_task\']: {output.measurements["example_task"]}')
###Output
| Round #1
| Aggregation result: 8.0 (expected 8.0)
| measurements['scaled_value']: 8.0
| measurements['example_task']: OrderedDict([('scaled_value', 8.0), ('example_task', ())])
| Round #2
| Aggregation result: 8.0 (expected 8.0)
| measurements['scaled_value']: 16.0
| measurements['example_task']: OrderedDict([('scaled_value', 32.0), ('example_task', ())])
###Markdown
Copyright 2021 The TensorFlow Federated Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Implementing Custom Aggregations View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook In this tutorial, we explain design principles behind the `tff.aggregators` module and best practices for implementing custom aggregation of values from clients to server.**Prerequisites.** This tutorial assumes you are already familiar with basic concepts of [Federated Core](https://www.tensorflow.org/federated/federated_core) such as placements (`tff.SERVER`, `tff.CLIENTS`), how TFF represents computations (`tff.tf_computation`, `tff.federated_computation`) and their type signatures.
###Code
#@test {"skip": true}
!pip install --quiet --upgrade tensorflow_federated_nightly
!pip install --quiet --upgrade nest_asyncio
import nest_asyncio
nest_asyncio.apply()
###Output
_____no_output_____
###Markdown
Design summary In TFF, "aggregation" refers to the movement of a set of values on `tff.CLIENTS` to produce an aggregate value of the same type on `tff.SERVER`. That is, each individual client value need not be available. For example in federated learning, client model updates are averaged to get an aggregate model update to apply to the global model on the server.In addition to operators accomplishing this goal such as `tff.federated_sum`, TFF provides `tff.templates.AggregationProcess` (a [stateful process](https://www.tensorflow.org/federated/federated_learningmodeling_state)) which formalizes the type signature for aggregation computation so it can generalize to more complex forms than a simple sum.The main components of the `tff.aggregators` module are *factories* for creation of the `AggregationProcess`, which are designed to be generally useful and replacable building blocks of TFF in two aspects: 1. *Parameterized computations.* Aggregation is an independent building block that can be plugged into other TFF modules designed to work with `tff.aggregators` to parameterize their necessary aggregation.Example: ```learning_process = tff.learning.build_federated_averaging_process( ..., model_update_aggregation_factory=tff.aggregators.MeanFactory())``` 2. *Aggregation composition.* An aggregation building block can be composed with other aggregation building blocks to create more complex composite aggregations.Example: ```secure_mean = tff.aggregators.MeanFactory( value_sum_factory=tff.aggregators.SecureSumFactory(...))``` The rest of this tutorial explains how these two goals are achieved. Aggregation process We first summarize the `tff.templates.AggregationProcess`, and follow with the factory pattern for its creation.The `tff.templates.AggregationProcess` is an `tff.templates.MeasuredProcess` with type signatures specified for aggregation. In particular, the `initialize` and `next` functions have the following type signatures:* `( -> state_type@SERVER)`* `( -> )`The state (of type `state_type`) must be placed at server. The `next` function takes as input argument the state and a value to be aggregated (of type `value_type`) placed at clients. The `*` means optional other input arguments, for instance weights in a weighted mean. It returns an updated state object, the aggregated value of the same type placed at server, and some measurements.Note that both the state to be passed between executions of the `next` function, and the reported measurements intended to report any information depending on a specific execution of the `next` function, may be empty. Nevertheless, they have to be explicitly specified for other parts of TFF to have a clear contract to follow.Other TFF modules, for instance the model updates in `tff.learning`, are expected to use the `tff.templates.AggregationProcess` to parameterize how values are aggregated. However, what exactly are the values aggregated and what their type signatures are, depends on other details of the model being trained and the learning algorithm used to do it.To make aggregation independent of the other aspects of computations, we use the factory pattern -- we create the appropriate `tff.templates.AggregationProcess` once the relevant type signatures of objects to be aggregated are available, by invoking the `create` method of the factory. Direct handling of the aggregation process is thus needed only for library authors, who are responsible for this creation. 
Aggregation process factories There are two abstract base factory classes for unweighted and weighted aggregation. Their `create` method takes type signatures of value to be aggregated and returns a `tff.templates.AggregationProcess` for aggregation of such values.The process created by `tff.aggregators.UnweightedAggregationFactory` takes two input arguments: (1) state at server and (2) value of specified type `value_type`.An example implementation is `tff.aggregators.SumFactory`.The process created by `tff.aggregators.WeightedAggregationFactory` takes three input arguments: (1) state at server, (2) value of specified type `value_type` and (3) weight of type `weight_type`, as specified by the factory's user when invoking its `create` method.An example implementation is `tff.aggregators.MeanFactory` which computes a weighted mean.The factory pattern is how we achieve the first goal stated above; that aggregation is an independent building block. For example, when changing which model variables are trainable, a complex aggregation does not necessarily need to change; the factory representing it will be invoked with a different type signature when used by a method such as `tff.learning.build_federated_averaging_process`. Compositions Recall that a general aggregation process can encapsulate (a) some preprocessing of the values at clients, (b) movement of values from client to server, and (c) some postprocessing of the aggregated value at the server. The second goal stated above, aggregation composition, is realized inside the `tff.aggregators` module by structuring the implementation of the aggregation factories such that part (b) can be delegated to another aggregation factory.Rather than implementing all necessary logic within a single factory class, the implementations are by default focused on a single aspect relevant for aggregation. When needed, this pattern then enables us to replace the building blocks one at a time.An example is the weighted `tff.aggregators.MeanFactory`. Its implementation multiplies provided values and weights at clients, then sums both weighted values and weights independently, and then divides the sum of weighted values by the sum of weights at the server. Instead of implementing the summations by directly using the `tff.federated_sum` operator, the summation is delegated to two instances of `tff.aggregators.SumFactory`.Such structure makes it possible for the two default summations to be replaced by different factories, which realize the sum differently. For example, a `tff.aggregators.SecureSumFactory`, or a custom implementation of the `tff.aggregators.UnweightedAggregationFactory`. Conversely, time, `tff.aggregators.MeanFactory` can itself be an inner aggregation of another factory such as `tff.aggregators.clipping_factory`, if the values are to be clipped before averaging.See the previous [Tuning recommended aggregations for learning](tuning_recommended_aggregators.ipynb) tutorial for receommended uses of the composition mechanism using existing factories in the `tff.aggregators` module. Best practices by example We are going to illustrate the `tff.aggregators` concepts in detail by implementing a simple example task, and make it progressively more general. Another way to learn is to look at the implementation of existing factories.
###Code
import collections
import tensorflow as tf
import tensorflow_federated as tff
###Output
_____no_output_____
###Markdown
Instead of summing `value`, the example task is to sum `value * 2.0` and then divide the sum by `2.0`. The aggregation result is thus mathematically equivalent to directly summing the `value`, and could be thought of as consisting of three parts: (1) scaling at clients (2) summing across clients (3) unscaling at server. NOTE: This task is not necessarily useful in practice. Nevertheless, it is helpful in explaining the underlying concepts. Following the design explained above, the logic will be implemented as a subclass of `tff.aggregators.UnweightedAggregationFactory`, which creates appropriate `tff.templates.AggregationProcess` when given a `value_type` to aggregate: Minimal implementation For the example task, the computations necessary are always the same, so there is no need for using state. It is thus empty, and represented as `tff.federated_value((), tff.SERVER)`. The same holds for measurements, for now.The minimal implementation of the task is thus as follows:
###Code
class ExampleTaskFactory(tff.aggregators.UnweightedAggregationFactory):
def create(self, value_type):
@tff.federated_computation()
def initialize_fn():
return tff.federated_value((), tff.SERVER)
@tff.federated_computation(initialize_fn.type_signature.result,
tff.type_at_clients(value_type))
def next_fn(state, value):
scaled_value = tff.federated_map(
tff.tf_computation(lambda x: x * 2.0), value)
summed_value = tff.federated_sum(scaled_value)
unscaled_value = tff.federated_map(
tff.tf_computation(lambda x: x / 2.0), summed_value)
measurements = tff.federated_value((), tff.SERVER)
return tff.templates.MeasuredProcessOutput(
state=state, result=unscaled_value, measurements=measurements)
return tff.templates.AggregationProcess(initialize_fn, next_fn)
###Output
_____no_output_____
###Markdown
Whether everything works as expected can be verified with the following code:
###Code
client_data = (1.0, 2.0, 5.0)
factory = ExampleTaskFactory()
aggregation_process = factory.create(tff.TensorType(tf.float32))
print(f'Type signatures of the created aggregation process:\n'
f' - initialize: {aggregation_process.initialize.type_signature}\n'
f' - next: {aggregation_process.next.type_signature}\n')
state = aggregation_process.initialize()
output = aggregation_process.next(state, client_data)
print(f'Aggregation result: {output.result} (expected 8.0)')
###Output
Type signatures of the created aggregation process:
- initialize: ( -> <>@SERVER)
- next: (<state=<>@SERVER,value={float32}@CLIENTS> -> <state=<>@SERVER,result=float32@SERVER,measurements=<>@SERVER>)
Aggregation result: 8.0 (expected 8.0)
###Markdown
Statefulness and measurements Statefulness is broadly used in TFF to represent computations that are expected to be executed iteratively and change with each iteration. For example, the state of a learning computation contains the weights of the model being learned.To illustrate how to use state in an aggregation computation, we modify the example task. Instead of multiplying `value` by `2.0`, we multiply it by the iteration index - the number of times the aggregation has been executed.To do so, we need a way to keep track of the iteration index, which is achieved through the concept of state. In the `initialize_fn`, instead of creating an empty state, we initialize the state to be a scalar zero. Then, state can be used in the `next_fn` in three steps: (1) increment by `1.0`, (2) use to multiply `value`, and (3) return as the new updated state. Once this is done, you may note: *But exactly the same code as above can be used to verify all works as expected. How do I know something has actually changed?*Good question! This is where the concept of measurements becomes useful. In general, measurements can report any value relevant to a single execution of the `next` function, which could be used for monitoring. In this case, it can be the `summed_value` from the previous example. That is, the value before the "unscaling" step, which should depend on the iteration index. *Again, this is not necessarily useful in practice, but illustrates the relevant mechanism.*The stateful answer to the task thus looks as follows:
###Code
class ExampleTaskFactory(tff.aggregators.UnweightedAggregationFactory):
def create(self, value_type):
@tff.federated_computation()
def initialize_fn():
return tff.federated_value(0.0, tff.SERVER)
@tff.federated_computation(initialize_fn.type_signature.result,
tff.type_at_clients(value_type))
def next_fn(state, value):
new_state = tff.federated_map(
tff.tf_computation(lambda x: x + 1.0), state)
state_at_clients = tff.federated_broadcast(new_state)
scaled_value = tff.federated_map(
tff.tf_computation(lambda x, y: x * y), (value, state_at_clients))
summed_value = tff.federated_sum(scaled_value)
unscaled_value = tff.federated_map(
tff.tf_computation(lambda x, y: x / y), (summed_value, new_state))
return tff.templates.MeasuredProcessOutput(
state=new_state, result=unscaled_value, measurements=summed_value)
return tff.templates.AggregationProcess(initialize_fn, next_fn)
###Output
_____no_output_____
###Markdown
Note that the `state` that comes into `next_fn` as input is placed at server. In order to use it at clients, it first needs to be communicated, which is achieved using the `tff.federated_broadcast` operator.To verify all works as expected, we can now look at the reported `measurements`, which should be different with each round of execution, even if run with the same `client_data`.
###Code
client_data = (1.0, 2.0, 5.0)
factory = ExampleTaskFactory()
aggregation_process = factory.create(tff.TensorType(tf.float32))
print(f'Type signatures of the created aggregation process:\n'
f' - initialize: {aggregation_process.initialize.type_signature}\n'
f' - next: {aggregation_process.next.type_signature}\n')
state = aggregation_process.initialize()
output = aggregation_process.next(state, client_data)
print('| Round #1')
print(f'| Aggregation result: {output.result} (expected 8.0)')
print(f'| Aggregation measurements: {output.measurements} (expected 8.0 * 1)')
output = aggregation_process.next(output.state, client_data)
print('\n| Round #2')
print(f'| Aggregation result: {output.result} (expected 8.0)')
print(f'| Aggregation measurements: {output.measurements} (expected 8.0 * 2)')
output = aggregation_process.next(output.state, client_data)
print('\n| Round #3')
print(f'| Aggregation result: {output.result} (expected 8.0)')
print(f'| Aggregation measurements: {output.measurements} (expected 8.0 * 3)')
###Output
Type signatures of the created aggregation process:
- initialize: ( -> float32@SERVER)
- next: (<state=float32@SERVER,value={float32}@CLIENTS> -> <state=float32@SERVER,result=float32@SERVER,measurements=float32@SERVER>)
| Round #1
| Aggregation result: 8.0 (expected 8.0)
| Aggregation measurements: 8.0 (expected 8.0 * 1)
| Round #2
| Aggregation result: 8.0 (expected 8.0)
| Aggregation measurements: 16.0 (expected 8.0 * 2)
| Round #3
| Aggregation result: 8.0 (expected 8.0)
| Aggregation measurements: 24.0 (expected 8.0 * 3)
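###Markdown
A side note on the state (a sketch reusing the objects defined above, not part of the original demonstration): because the iteration index lives in `state`, calling `initialize` again resets it to zero, so the following round scales by `1.0` again and the measurements drop back to `8.0`.
###Code
# Re-initialize: the iteration index in the state is reset to 0.0.
state = aggregation_process.initialize()
output = aggregation_process.next(state, client_data)
print(f'Measurements after re-initialization: {output.measurements} (expected 8.0)')
###Output
_____no_output_____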
###Markdown
Structured types The model weights of a model trained in federated learning are usually represented as a collection of tensors, rather than a single tensor. In TFF, this is represented as `tff.StructType`, and generally useful aggregation factories need to be able to accept the structured types. However, in the above examples, we only worked with a `tff.TensorType` object. If we try to use the previous factory to create the aggregation process with a `tff.StructType([(tf.float32, (2,)), (tf.float32, (3,))])`, we get a strange error because TensorFlow will try to multiply a `tf.Tensor` and a `list`. The problem is that instead of multiplying the structure of tensors by a constant, we need to multiply *each tensor in the structure* by a constant. The usual solution to this problem is to use the `tf.nest` module inside of the created `tff.tf_computation`s. The version of the previous `ExampleTaskFactory` compatible with structured types thus looks as follows:
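###Markdown
As a minimal, TFF-independent sketch of the `tf.nest` idea (assuming only TensorFlow): multiplying a structure of tensors by a scalar means mapping the multiplication over each leaf tensor.
###Code
import tensorflow as tf
# A structure of two tensors, analogous to a struct of model weights.
weights = [tf.constant([1.0, 2.0]), tf.constant([3.0, 4.0, 5.0])]
# tf.nest.map_structure applies the lambda to every leaf of the structure.
doubled = tf.nest.map_structure(lambda x: x * 2.0, weights)
print(doubled)
###Output
_____no_output_____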
###Code
@tff.tf_computation()
def scale(value, factor):
return tf.nest.map_structure(lambda x: x * factor, value)
@tff.tf_computation()
def unscale(value, factor):
return tf.nest.map_structure(lambda x: x / factor, value)
@tff.tf_computation()
def add_one(value):
return value + 1.0
class ExampleTaskFactory(tff.aggregators.UnweightedAggregationFactory):
def create(self, value_type):
@tff.federated_computation()
def initialize_fn():
return tff.federated_value(0.0, tff.SERVER)
@tff.federated_computation(initialize_fn.type_signature.result,
tff.type_at_clients(value_type))
def next_fn(state, value):
new_state = tff.federated_map(add_one, state)
state_at_clients = tff.federated_broadcast(new_state)
scaled_value = tff.federated_map(scale, (value, state_at_clients))
summed_value = tff.federated_sum(scaled_value)
unscaled_value = tff.federated_map(unscale, (summed_value, new_state))
return tff.templates.MeasuredProcessOutput(
state=new_state, result=unscaled_value, measurements=summed_value)
return tff.templates.AggregationProcess(initialize_fn, next_fn)
###Output
_____no_output_____
###Markdown
This example highlights a pattern which may be useful to follow when structuring TFF code. When not dealing with very simple operations, the code becomes more legible when the `tff.tf_computation`s that will be used as building blocks inside a `tff.federated_computation` are created in a separate place. Inside of the `tff.federated_computation`, these building blocks are only connected using the intrinsic operators. To verify it works as expected:
###Code
client_data = [[[1.0, 2.0], [3.0, 4.0, 5.0]],
[[1.0, 1.0], [3.0, 0.0, -5.0]]]
factory = ExampleTaskFactory()
aggregation_process = factory.create(
tff.to_type([(tf.float32, (2,)), (tf.float32, (3,))]))
print(f'Type signatures of the created aggregation process:\n'
f' - initialize: {aggregation_process.initialize.type_signature}\n'
f' - next: {aggregation_process.next.type_signature}\n')
state = aggregation_process.initialize()
output = aggregation_process.next(state, client_data)
print(f'Aggregation result: [{output.result[0]}, {output.result[1]}]\n'
f' Expected: [[2. 3.], [6. 4. 0.]]')
###Output
Type signatures of the created aggregation process:
- initialize: ( -> float32@SERVER)
- next: (<state=float32@SERVER,value={<float32[2],float32[3]>}@CLIENTS> -> <state=float32@SERVER,result=<float32[2],float32[3]>@SERVER,measurements=<float32[2],float32[3]>@SERVER>)
Aggregation result: [[2. 3.], [6. 4. 0.]]
Expected: [[2. 3.], [6. 4. 0.]]
###Markdown
Inner aggregations The final step is to optionally enable delegation of the actual aggregation to other factories, in order to allow easy composition of different aggregation techniques. This is achieved by creating an optional `inner_factory` argument in the constructor of our `ExampleTaskFactory`. If not specified, `tff.aggregators.SumFactory` is used, which applies the `tff.federated_sum` operator used directly in the previous section. When `create` is called, we can first call `create` of the `inner_factory` to create the inner aggregation process with the same `value_type`. The state of our process returned by `initialize_fn` is a composition of two parts: the state created by "this" process, and the state of the just created inner process. The implementation of the `next_fn` differs in that the actual aggregation is delegated to the `next` function of the inner process, and in how the final output is composed. The state is again composed of "this" and "inner" state, and measurements are composed in a similar manner as an `OrderedDict`. The following is an implementation of such a pattern.
###Code
@tff.tf_computation()
def scale(value, factor):
return tf.nest.map_structure(lambda x: x * factor, value)
@tff.tf_computation()
def unscale(value, factor):
return tf.nest.map_structure(lambda x: x / factor, value)
@tff.tf_computation()
def add_one(value):
return value + 1.0
class ExampleTaskFactory(tff.aggregators.UnweightedAggregationFactory):
def __init__(self, inner_factory=None):
if inner_factory is None:
inner_factory = tff.aggregators.SumFactory()
self._inner_factory = inner_factory
def create(self, value_type):
inner_process = self._inner_factory.create(value_type)
@tff.federated_computation()
def initialize_fn():
my_state = tff.federated_value(0.0, tff.SERVER)
inner_state = inner_process.initialize()
return tff.federated_zip((my_state, inner_state))
@tff.federated_computation(initialize_fn.type_signature.result,
tff.type_at_clients(value_type))
def next_fn(state, value):
my_state, inner_state = state
my_new_state = tff.federated_map(add_one, my_state)
my_state_at_clients = tff.federated_broadcast(my_new_state)
scaled_value = tff.federated_map(scale, (value, my_state_at_clients))
# Delegation to an inner factory, returning values placed at SERVER.
inner_output = inner_process.next(inner_state, scaled_value)
unscaled_value = tff.federated_map(unscale, (inner_output.result, my_new_state))
new_state = tff.federated_zip((my_new_state, inner_output.state))
measurements = tff.federated_zip(
collections.OrderedDict(
scaled_value=inner_output.result,
example_task=inner_output.measurements))
return tff.templates.MeasuredProcessOutput(
state=new_state, result=unscaled_value, measurements=measurements)
return tff.templates.AggregationProcess(initialize_fn, next_fn)
###Output
_____no_output_____
###Markdown
When delegating to the `inner_process.next` function, the return structure we get is a `tff.templates.MeasuredProcessOutput`, with the same three fields - `state`, `result` and `measurements`. When creating the overall return structure of the composed aggregation process, the `state` and `measurements` fields should generally be composed and returned together. In contrast, the `result` field corresponds to the value being aggregated and instead "flows through" the composed aggregation. The `state` object should be seen as an implementation detail of the factory, and thus the composition could be of any structure. However, `measurements` correspond to values to be reported to the user at some point. Therefore, we recommend using `OrderedDict`, with composed naming so that it is clear where in a composition a reported metric comes from. Note also the use of the `tff.federated_zip` operator. The `state` object controlled by the created process should be a `tff.FederatedType`. If we had instead returned `(this_state, inner_state)` in the `initialize_fn`, its return type signature would be a `tff.StructType` containing a 2-tuple of `tff.FederatedType`s. The use of `tff.federated_zip` "lifts" the `tff.FederatedType` to the top level. This is similarly used in the `next_fn` when preparing the state and measurements to be returned. Finally, we can see how this can be used with the default inner aggregation:
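###Markdown
To make the "lifting" effect of `tff.federated_zip` concrete, here is a small sketch (using only the public TFF API; `without_zip` and `with_zip` are illustrative names) comparing the result types with and without zipping.
###Code
@tff.federated_computation()
def without_zip():
  # Returning a Python tuple of federated values yields a struct of
  # federated types: <float32@SERVER,int32@SERVER>.
  return (tff.federated_value(0.0, tff.SERVER),
          tff.federated_value(1, tff.SERVER))
@tff.federated_computation()
def with_zip():
  # federated_zip lifts the placement to the top level: <float32,int32>@SERVER.
  return tff.federated_zip((tff.federated_value(0.0, tff.SERVER),
                            tff.federated_value(1, tff.SERVER)))
print(without_zip.type_signature.result)
print(with_zip.type_signature.result)
###Output
_____no_output_____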
###Code
client_data = (1.0, 2.0, 5.0)
factory = ExampleTaskFactory()
aggregation_process = factory.create(tff.TensorType(tf.float32))
state = aggregation_process.initialize()
output = aggregation_process.next(state, client_data)
print('| Round #1')
print(f'| Aggregation result: {output.result} (expected 8.0)')
print(f'| measurements[\'scaled_value\']: {output.measurements["scaled_value"]}')
print(f'| measurements[\'example_task\']: {output.measurements["example_task"]}')
output = aggregation_process.next(output.state, client_data)
print('\n| Round #2')
print(f'| Aggregation result: {output.result} (expected 8.0)')
print(f'| measurements[\'scaled_value\']: {output.measurements["scaled_value"]}')
print(f'| measurements[\'example_task\']: {output.measurements["example_task"]}')
###Output
| Round #1
| Aggregation result: 8.0 (expected 8.0)
| measurements['scaled_value']: 8.0
| measurements['example_task']: ()
| Round #2
| Aggregation result: 8.0 (expected 8.0)
| measurements['scaled_value']: 16.0
| measurements['example_task']: ()
###Markdown
... and with a different inner aggregation. For example, with an inner `ExampleTaskFactory`:
###Code
client_data = (1.0, 2.0, 5.0)
# Note the inner delegation can be to any UnweightedAggregationFactory.
# In this case, each factory creates a process that multiplies by the iteration
# index (1, 2, 3, ...), thus their combination multiplies by (1, 4, 9, ...).
factory = ExampleTaskFactory(ExampleTaskFactory())
aggregation_process = factory.create(tff.TensorType(tf.float32))
state = aggregation_process.initialize()
output = aggregation_process.next(state, client_data)
print('| Round #1')
print(f'| Aggregation result: {output.result} (expected 8.0)')
print(f'| measurements[\'scaled_value\']: {output.measurements["scaled_value"]}')
print(f'| measurements[\'example_task\']: {output.measurements["example_task"]}')
output = aggregation_process.next(output.state, client_data)
print('\n| Round #2')
print(f'| Aggregation result: {output.result} (expected 8.0)')
print(f'| measurements[\'scaled_value\']: {output.measurements["scaled_value"]}')
print(f'| measurements[\'example_task\']: {output.measurements["example_task"]}')
###Output
| Round #1
| Aggregation result: 8.0 (expected 8.0)
| measurements['scaled_value']: 8.0
| measurements['example_task']: OrderedDict([('scaled_value', 8.0), ('example_task', ())])
| Round #2
| Aggregation result: 8.0 (expected 8.0)
| measurements['scaled_value']: 16.0
| measurements['example_task']: OrderedDict([('scaled_value', 32.0), ('example_task', ())])
###Markdown
Copyright 2021 The TensorFlow Federated Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Implementing Custom Aggregations In this tutorial, we explain design principles behind the `tff.aggregators` module and best practices for implementing custom aggregation of values from clients to server. **Prerequisites.** This tutorial assumes you are already familiar with basic concepts of [Federated Core](https://www.tensorflow.org/federated/federated_core) such as placements (`tff.SERVER`, `tff.CLIENTS`), how TFF represents computations (`tff.tf_computation`, `tff.federated_computation`) and their type signatures.
###Code
#@test {"skip": true}
!pip install --quiet --upgrade tensorflow_federated_nightly
!pip install --quiet --upgrade nest_asyncio
import nest_asyncio
nest_asyncio.apply()
###Output
_____no_output_____
###Markdown
Design summary In TFF, "aggregation" refers to the movement of a set of values on `tff.CLIENTS` to produce an aggregate value of the same type on `tff.SERVER`. That is, each individual client value need not be available. For example in federated learning, client model updates are averaged to get an aggregate model update to apply to the global model on the server. In addition to operators accomplishing this goal such as `tff.federated_sum`, TFF provides `tff.templates.AggregationProcess` (a [stateful process](https://www.tensorflow.org/federated/federated_learning#modeling_state)) which formalizes the type signature for aggregation computation so it can generalize to more complex forms than a simple sum. The main components of the `tff.aggregators` module are *factories* for creation of the `AggregationProcess`, which are designed to be generally useful and replaceable building blocks of TFF in two aspects: 1. *Parameterized computations.* Aggregation is an independent building block that can be plugged into other TFF modules designed to work with `tff.aggregators` to parameterize their necessary aggregation. Example: ```learning_process = tff.learning.build_federated_averaging_process( ..., model_update_aggregation_factory=tff.aggregators.MeanFactory())``` 2. *Aggregation composition.* An aggregation building block can be composed with other aggregation building blocks to create more complex composite aggregations. Example: ```secure_mean = tff.aggregators.MeanFactory( value_sum_factory=tff.aggregators.SecureSumFactory(...))``` The rest of this tutorial explains how these two goals are achieved. Aggregation process We first summarize the `tff.templates.AggregationProcess`, and follow with the factory pattern for its creation. The `tff.templates.AggregationProcess` is a `tff.templates.MeasuredProcess` with type signatures specified for aggregation. In particular, the `initialize` and `next` functions have the following type signatures: * `initialize`: `( -> state_type@SERVER)` * `next`: `(<state_type@SERVER, {value_type}@CLIENTS, *> -> <state_type@SERVER, value_type@SERVER, measurements_type@SERVER>)` The state (of type `state_type`) must be placed at server. The `next` function takes as input arguments the state and a value to be aggregated (of type `value_type`) placed at clients. The `*` means optional other input arguments, for instance weights in a weighted mean. It returns an updated state object, the aggregated value of the same type placed at server, and some measurements. Note that both the state to be passed between executions of the `next` function, and the reported measurements intended to report any information depending on a specific execution of the `next` function, may be empty. Nevertheless, they have to be explicitly specified for other parts of TFF to have a clear contract to follow. Other TFF modules, for instance the model updates in `tff.learning`, are expected to use the `tff.templates.AggregationProcess` to parameterize how values are aggregated. However, which values exactly are aggregated and what their type signatures are depends on other details of the model being trained and the learning algorithm used to do it. To make aggregation independent of the other aspects of computations, we use the factory pattern -- we create the appropriate `tff.templates.AggregationProcess` once the relevant type signatures of objects to be aggregated are available, by invoking the `create` method of the factory. Direct handling of the aggregation process is thus needed only for library authors, who are responsible for this creation.
Aggregation process factories There are two abstract base factory classes for unweighted and weighted aggregation. Their `create` method takes the type signature of the value to be aggregated and returns a `tff.templates.AggregationProcess` for aggregation of such values. The process created by `tff.aggregators.UnweightedAggregationFactory` takes two input arguments: (1) state at server and (2) value of specified type `value_type`. An example implementation is `tff.aggregators.SumFactory`. The process created by `tff.aggregators.WeightedAggregationFactory` takes three input arguments: (1) state at server, (2) value of specified type `value_type` and (3) weight of type `weight_type`, as specified by the factory's user when invoking its `create` method. An example implementation is `tff.aggregators.MeanFactory`, which computes a weighted mean. The factory pattern is how we achieve the first goal stated above: that aggregation is an independent building block. For example, when changing which model variables are trainable, a complex aggregation does not necessarily need to change; the factory representing it will be invoked with a different type signature when used by a method such as `tff.learning.build_federated_averaging_process`. Compositions Recall that a general aggregation process can encapsulate (a) some preprocessing of the values at clients, (b) movement of values from client to server, and (c) some postprocessing of the aggregated value at the server. The second goal stated above, aggregation composition, is realized inside the `tff.aggregators` module by structuring the implementation of the aggregation factories such that part (b) can be delegated to another aggregation factory. Rather than implementing all necessary logic within a single factory class, the implementations are by default focused on a single aspect relevant for aggregation. When needed, this pattern then enables us to replace the building blocks one at a time. An example is the weighted `tff.aggregators.MeanFactory`. Its implementation multiplies provided values and weights at clients, then sums both weighted values and weights independently, and then divides the sum of weighted values by the sum of weights at the server. Instead of implementing the summations by directly using the `tff.federated_sum` operator, the summation is delegated to two instances of `tff.aggregators.SumFactory`. Such a structure makes it possible for the two default summations to be replaced by different factories, which realize the sum differently. For example, a `tff.aggregators.SecureSumFactory`, or a custom implementation of the `tff.aggregators.UnweightedAggregationFactory`. Conversely, `tff.aggregators.MeanFactory` can itself be an inner aggregation of another factory such as `tff.aggregators.clipping_factory`, if the values are to be clipped before averaging. See the previous [Tuning recommended aggregations for learning](tuning_recommended_aggregators.ipynb) tutorial for recommended uses of the composition mechanism using existing factories in the `tff.aggregators` module. Best practices by example We are going to illustrate the `tff.aggregators` concepts in detail by implementing a simple example task, and make it progressively more general. Another way to learn is to look at the implementation of existing factories.
###Code
import collections
import tensorflow as tf
import tensorflow_federated as tff
###Output
_____no_output_____
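###Markdown
Before building a custom factory, it can help to see the factory pattern and the composition mechanism described above in action with the built-in factories. This is an orientation sketch only; `MeanFactory`, `SecureSumFactory` and their argument names are part of the public `tff.aggregators` API, but the bound `100.0` is an arbitrary value chosen for illustration.
###Code
# A weighted mean: create() takes the value type and the weight type.
mean_process = tff.aggregators.MeanFactory().create(
    tff.TensorType(tf.float32), tff.TensorType(tf.float32))
print(mean_process.next.type_signature)
# Composition: replace the inner value summation with secure summation.
secure_mean = tff.aggregators.MeanFactory(
    value_sum_factory=tff.aggregators.SecureSumFactory(
        upper_bound_threshold=100.0))
secure_process = secure_mean.create(
    tff.TensorType(tf.float32), tff.TensorType(tf.float32))
print(secure_process.next.type_signature)
###Output
_____no_output_____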
###Markdown
Instead of summing `value`, the example task is to sum `value * 2.0` and then divide the sum by `2.0`. The aggregation result is thus mathematically equivalent to directly summing the `value`, and could be thought of as consisting of three parts: (1) scaling at clients, (2) summing across clients, and (3) unscaling at server. NOTE: This task is not necessarily useful in practice. Nevertheless, it is helpful in explaining the underlying concepts. Following the design explained above, the logic will be implemented as a subclass of `tff.aggregators.UnweightedAggregationFactory`, which creates an appropriate `tff.templates.AggregationProcess` when given a `value_type` to aggregate: Minimal implementation For the example task, the computations necessary are always the same, so there is no need for using state. It is thus empty, and represented as `tff.federated_value((), tff.SERVER)`. The same holds for measurements, for now. The minimal implementation of the task is thus as follows:
###Code
class ExampleTaskFactory(tff.aggregators.UnweightedAggregationFactory):
def create(self, value_type):
@tff.federated_computation()
def initialize_fn():
return tff.federated_value((), tff.SERVER)
@tff.federated_computation(initialize_fn.type_signature.result,
tff.type_at_clients(value_type))
def next_fn(state, value):
scaled_value = tff.federated_map(
tff.tf_computation(lambda x: x * 2.0), value)
summed_value = tff.federated_sum(scaled_value)
unscaled_value = tff.federated_map(
tff.tf_computation(lambda x: x / 2.0), summed_value)
measurements = tff.federated_value((), tff.SERVER)
return tff.templates.MeasuredProcessOutput(
state=state, result=unscaled_value, measurements=measurements)
return tff.templates.AggregationProcess(initialize_fn, next_fn)
###Output
_____no_output_____
###Markdown
Whether everything works as expected can be verified with the following code:
###Code
client_data = [1.0, 2.0, 5.0]
factory = ExampleTaskFactory()
aggregation_process = factory.create(tff.TensorType(tf.float32))
print(f'Type signatures of the created aggregation process:\n'
f' - initialize: {aggregation_process.initialize.type_signature}\n'
f' - next: {aggregation_process.next.type_signature}\n')
state = aggregation_process.initialize()
output = aggregation_process.next(state, client_data)
print(f'Aggregation result: {output.result} (expected 8.0)')
###Output
Type signatures of the created aggregation process:
- initialize: ( -> <>@SERVER)
- next: (<state=<>@SERVER,value={float32}@CLIENTS> -> <state=<>@SERVER,result=float32@SERVER,measurements=<>@SERVER>)
Aggregation result: 8.0 (expected 8.0)
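###Markdown
Because this implementation keeps no state (it is the empty tuple), repeated rounds behave identically. A quick sketch reusing the objects defined above, not part of the original demonstration:
###Code
# Run a second round with the state returned from the first round.
output = aggregation_process.next(output.state, client_data)
print(f'Second-round result: {output.result} (still 8.0; nothing depends on state)')
###Output
_____no_output_____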
###Markdown
Statefulness and measurements Statefulness is broadly used in TFF to represent computations that are expected to be executed iteratively and change with each iteration. For example, the state of a learning computation contains the weights of the model being learned. To illustrate how to use state in an aggregation computation, we modify the example task. Instead of multiplying `value` by `2.0`, we multiply it by the iteration index - the number of times the aggregation has been executed. To do so, we need a way to keep track of the iteration index, which is achieved through the concept of state. In the `initialize_fn`, instead of creating an empty state, we initialize the state to be a scalar zero. Then, state can be used in the `next_fn` in three steps: (1) increment by `1.0`, (2) use to multiply `value`, and (3) return as the new updated state. Once this is done, you may note: *But exactly the same code as above can be used to verify all works as expected. How do I know something has actually changed?* Good question! This is where the concept of measurements becomes useful. In general, measurements can report any value relevant to a single execution of the `next` function, which could be used for monitoring. In this case, it can be the `summed_value` from the previous example. That is, the value before the "unscaling" step, which should depend on the iteration index. *Again, this is not necessarily useful in practice, but illustrates the relevant mechanism.* The stateful answer to the task thus looks as follows:
###Code
class ExampleTaskFactory(tff.aggregators.UnweightedAggregationFactory):
def create(self, value_type):
@tff.federated_computation()
def initialize_fn():
return tff.federated_value(0.0, tff.SERVER)
@tff.federated_computation(initialize_fn.type_signature.result,
tff.type_at_clients(value_type))
def next_fn(state, value):
new_state = tff.federated_map(
tff.tf_computation(lambda x: x + 1.0), state)
state_at_clients = tff.federated_broadcast(new_state)
scaled_value = tff.federated_map(
tff.tf_computation(lambda x, y: x * y), (value, state_at_clients))
summed_value = tff.federated_sum(scaled_value)
unscaled_value = tff.federated_map(
tff.tf_computation(lambda x, y: x / y), (summed_value, new_state))
return tff.templates.MeasuredProcessOutput(
state=new_state, result=unscaled_value, measurements=summed_value)
return tff.templates.AggregationProcess(initialize_fn, next_fn)
###Output
_____no_output_____
###Markdown
Note that the `state` that comes into `next_fn` as input is placed at server. In order to use it at clients, it first needs to be communicated, which is achieved using the `tff.federated_broadcast` operator. To verify all works as expected, we can now look at the reported `measurements`, which should be different with each round of execution, even if run with the same `client_data`.
###Code
client_data = [1.0, 2.0, 5.0]
factory = ExampleTaskFactory()
aggregation_process = factory.create(tff.TensorType(tf.float32))
print(f'Type signatures of the created aggregation process:\n'
f' - initialize: {aggregation_process.initialize.type_signature}\n'
f' - next: {aggregation_process.next.type_signature}\n')
state = aggregation_process.initialize()
output = aggregation_process.next(state, client_data)
print('| Round #1')
print(f'| Aggregation result: {output.result} (expected 8.0)')
print(f'| Aggregation measurements: {output.measurements} (expected 8.0 * 1)')
output = aggregation_process.next(output.state, client_data)
print('\n| Round #2')
print(f'| Aggregation result: {output.result} (expected 8.0)')
print(f'| Aggregation measurements: {output.measurements} (expected 8.0 * 2)')
output = aggregation_process.next(output.state, client_data)
print('\n| Round #3')
print(f'| Aggregation result: {output.result} (expected 8.0)')
print(f'| Aggregation measurements: {output.measurements} (expected 8.0 * 3)')
###Output
Type signatures of the created aggregation process:
- initialize: ( -> float32@SERVER)
- next: (<state=float32@SERVER,value={float32}@CLIENTS> -> <state=float32@SERVER,result=float32@SERVER,measurements=float32@SERVER>)
| Round #1
| Aggregation result: 8.0 (expected 8.0)
| Aggregation measurements: 8.0 (expected 8.0 * 1)
| Round #2
| Aggregation result: 8.0 (expected 8.0)
| Aggregation measurements: 16.0 (expected 8.0 * 2)
| Round #3
| Aggregation result: 8.0 (expected 8.0)
| Aggregation measurements: 24.0 (expected 8.0 * 3)
###Markdown
Structured types The model weights of a model trained in federated learning are usually represented as a collection of tensors, rather than a single tensor. In TFF, this is represented as `tff.StructType`, and generally useful aggregation factories need to be able to accept the structured types. However, in the above examples, we only worked with a `tff.TensorType` object. If we try to use the previous factory to create the aggregation process with a `tff.StructType([(tf.float32, (2,)), (tf.float32, (3,))])`, we get a strange error because TensorFlow will try to multiply a `tf.Tensor` and a `list`. The problem is that instead of multiplying the structure of tensors by a constant, we need to multiply *each tensor in the structure* by a constant. The usual solution to this problem is to use the `tf.nest` module inside of the created `tff.tf_computation`s. The version of the previous `ExampleTaskFactory` compatible with structured types thus looks as follows:
###Code
@tff.tf_computation()
def scale(value, factor):
return tf.nest.map_structure(lambda x: x * factor, value)
@tff.tf_computation()
def unscale(value, factor):
return tf.nest.map_structure(lambda x: x / factor, value)
@tff.tf_computation()
def add_one(value):
return value + 1.0
class ExampleTaskFactory(tff.aggregators.UnweightedAggregationFactory):
def create(self, value_type):
@tff.federated_computation()
def initialize_fn():
return tff.federated_value(0.0, tff.SERVER)
@tff.federated_computation(initialize_fn.type_signature.result,
tff.type_at_clients(value_type))
def next_fn(state, value):
new_state = tff.federated_map(add_one, state)
state_at_clients = tff.federated_broadcast(new_state)
scaled_value = tff.federated_map(scale, (value, state_at_clients))
summed_value = tff.federated_sum(scaled_value)
unscaled_value = tff.federated_map(unscale, (summed_value, new_state))
return tff.templates.MeasuredProcessOutput(
state=new_state, result=unscaled_value, measurements=summed_value)
return tff.templates.AggregationProcess(initialize_fn, next_fn)
###Output
_____no_output_____
###Markdown
This example highlights a pattern which may be useful to follow when structuring TFF code. When not dealing with very simple operations, the code becomes more legible when the `tff.tf_computation`s that will be used as building blocks inside a `tff.federated_computation` are created in a separate place. Inside of the `tff.federated_computation`, these building blocks are only connected using the intrinsic operators. To verify it works as expected:
###Code
client_data = [[[1.0, 2.0], [3.0, 4.0, 5.0]],
[[1.0, 1.0], [3.0, 0.0, -5.0]]]
factory = ExampleTaskFactory()
aggregation_process = factory.create(
tff.to_type([(tf.float32, (2,)), (tf.float32, (3,))]))
print(f'Type signatures of the created aggregation process:\n'
f' - initialize: {aggregation_process.initialize.type_signature}\n'
f' - next: {aggregation_process.next.type_signature}\n')
state = aggregation_process.initialize()
output = aggregation_process.next(state, client_data)
print(f'Aggregation result: [{output.result[0]}, {output.result[1]}]\n'
f' Expected: [[2. 3.], [6. 4. 0.]]')
###Output
Type signatures of the created aggregation process:
- initialize: ( -> float32@SERVER)
- next: (<state=float32@SERVER,value={<float32[2],float32[3]>}@CLIENTS> -> <state=float32@SERVER,result=<float32[2],float32[3]>@SERVER,measurements=<float32[2],float32[3]>@SERVER>)
Aggregation result: [[2. 3.], [6. 4. 0.]]
Expected: [[2. 3.], [6. 4. 0.]]
###Markdown
Inner aggregations The final step is to optionally enable delegation of the actual aggregation to other factories, in order to allow easy composition of different aggregation techniques. This is achieved by creating an optional `inner_factory` argument in the constructor of our `ExampleTaskFactory`. If not specified, `tff.aggregators.SumFactory` is used, which applies the `tff.federated_sum` operator used directly in the previous section. When `create` is called, we can first call `create` of the `inner_factory` to create the inner aggregation process with the same `value_type`. The state of our process returned by `initialize_fn` is a composition of two parts: the state created by "this" process, and the state of the just created inner process. The implementation of the `next_fn` differs in that the actual aggregation is delegated to the `next` function of the inner process, and in how the final output is composed. The state is again composed of "this" and "inner" state, and measurements are composed in a similar manner as an `OrderedDict`. The following is an implementation of such a pattern.
###Code
@tff.tf_computation()
def scale(value, factor):
return tf.nest.map_structure(lambda x: x * factor, value)
@tff.tf_computation()
def unscale(value, factor):
return tf.nest.map_structure(lambda x: x / factor, value)
@tff.tf_computation()
def add_one(value):
return value + 1.0
class ExampleTaskFactory(tff.aggregators.UnweightedAggregationFactory):
def __init__(self, inner_factory=None):
if inner_factory is None:
inner_factory = tff.aggregators.SumFactory()
self._inner_factory = inner_factory
def create(self, value_type):
inner_process = self._inner_factory.create(value_type)
@tff.federated_computation()
def initialize_fn():
my_state = tff.federated_value(0.0, tff.SERVER)
inner_state = inner_process.initialize()
return tff.federated_zip((my_state, inner_state))
@tff.federated_computation(initialize_fn.type_signature.result,
tff.type_at_clients(value_type))
def next_fn(state, value):
my_state, inner_state = state
my_new_state = tff.federated_map(add_one, my_state)
my_state_at_clients = tff.federated_broadcast(my_new_state)
scaled_value = tff.federated_map(scale, (value, my_state_at_clients))
# Delegation to an inner factory, returning values placed at SERVER.
inner_output = inner_process.next(inner_state, scaled_value)
unscaled_value = tff.federated_map(unscale, (inner_output.result, my_new_state))
new_state = tff.federated_zip((my_new_state, inner_output.state))
measurements = tff.federated_zip(
collections.OrderedDict(
scaled_value=inner_output.result,
example_task=inner_output.measurements))
return tff.templates.MeasuredProcessOutput(
state=new_state, result=unscaled_value, measurements=measurements)
return tff.templates.AggregationProcess(initialize_fn, next_fn)
###Output
_____no_output_____
###Markdown
When delegating to the `inner_process.next` function, the return structure we get is a `tff.templates.MeasuredProcessOutput`, with the same three fields - `state`, `result` and `measurements`. When creating the overall return structure of the composed aggregation process, the `state` and `measurements` fields should generally be composed and returned together. In contrast, the `result` field corresponds to the value being aggregated and instead "flows through" the composed aggregation.The `state` object should be seen as an implementation detail of the factory, and thus the composition could be of any structure. However, `measurements` correspond to values to be reported to the user at some point. Therefore, we recommend using `OrderedDict`, with composed naming such that it is clear where in a composition a reported metric comes from.Note also the use of the `tff.federated_zip` operator. The `state` object controlled by the created process should be a `tff.FederatedType`. If we had instead returned `(this_state, inner_state)` in the `initialize_fn`, its return type signature would be a `tff.StructType` containing a 2-tuple of `tff.FederatedType`s. The use of `tff.federated_zip` "lifts" the `tff.FederatedType` to the top level. This is similarly used in the `next_fn` when preparing the state and measurements to be returned. Finally, we can see how this can be used with the default inner aggregation:
###Code
client_data = [1.0, 2.0, 5.0]
factory = ExampleTaskFactory()
aggregation_process = factory.create(tff.TensorType(tf.float32))
state = aggregation_process.initialize()
output = aggregation_process.next(state, client_data)
print('| Round #1')
print(f'| Aggregation result: {output.result} (expected 8.0)')
print(f'| measurements[\'scaled_value\']: {output.measurements["scaled_value"]}')
print(f'| measurements[\'example_task\']: {output.measurements["example_task"]}')
output = aggregation_process.next(output.state, client_data)
print('\n| Round #2')
print(f'| Aggregation result: {output.result} (expected 8.0)')
print(f'| measurements[\'scaled_value\']: {output.measurements["scaled_value"]}')
print(f'| measurements[\'example_task\']: {output.measurements["example_task"]}')
###Output
| Round #1
| Aggregation result: 8.0 (expected 8.0)
| measurements['scaled_value']: 8.0
| measurements['example_task']: ()
| Round #2
| Aggregation result: 8.0 (expected 8.0)
| measurements['scaled_value']: 16.0
| measurements['example_task']: ()
###Markdown
... and with a different inner aggregation, for example an inner `ExampleTaskFactory`:
###Code
client_data = [1.0, 2.0, 5.0]
# Note the inner delegation can be to any UnweightedAggregationFactory.
# In this case, each factory creates process that multiplies by the iteration
# index (1, 2, 3, ...), thus their combination multiplies by (1, 4, 9, ...).
factory = ExampleTaskFactory(ExampleTaskFactory())
aggregation_process = factory.create(tff.TensorType(tf.float32))
state = aggregation_process.initialize()
output = aggregation_process.next(state, client_data)
print('| Round #1')
print(f'| Aggregation result: {output.result} (expected 8.0)')
print(f'| measurements[\'scaled_value\']: {output.measurements["scaled_value"]}')
print(f'| measurements[\'example_task\']: {output.measurements["example_task"]}')
output = aggregation_process.next(output.state, client_data)
print('\n| Round #2')
print(f'| Aggregation result: {output.result} (expected 8.0)')
print(f'| measurements[\'scaled_value\']: {output.measurements["scaled_value"]}')
print(f'| measurements[\'example_task\']: {output.measurements["example_task"]}')
###Output
| Round #1
| Aggregation result: 8.0 (expected 8.0)
| measurements['scaled_value']: 8.0
| measurements['example_task']: OrderedDict([('scaled_value', 8.0), ('example_task', ())])
| Round #2
| Aggregation result: 8.0 (expected 8.0)
| measurements['scaled_value']: 16.0
| measurements['example_task']: OrderedDict([('scaled_value', 32.0), ('example_task', ())])
|
notebooks/name_parse/build_profile_events-Copy1.ipynb | ###Markdown
Run time with Po1 load()
###Code
%%time
load_profiles()
###Output
created example config file at: /Users/wmcabee/.config/nbc_analysis/extracts.yaml
>> writing record, relation count=11
>> writing record, relation count=11
>> writing record, relation count=11
>> writing record, relation count=11
>> writing record, relation count=11
>> writing record, relation count=11
>> writing record, relation count=11
>> writing record, relation count=11
>> writing record, relation count=11
>> writing record, relation count=11
CPU times: user 372 ms, sys: 32.7 ms, total: 405 ms
Wall time: 14 s
###Markdown
Run time without Po1 load
###Code
%%time
df = load_profiles(skip_po1=True)
dfs = DataFrameSummary(df)
dfs.columns_stats.T
###Output
_____no_output_____ |
examples/Playing with matplotlib.ipynb | ###Markdown
Playing with matplotlib Variable declarations - `DEM_filepath` – path to DEM raster - `sample_points_filepath` – path to sample points shapefile
###Code
DEM_filepath = ""
sample_points_filepath = ""
###Output
_____no_output_____
###Markdown
Import statements
###Code
import matplotlib.pylab as plt
%matplotlib inline
import rasterio
import fiona
###Output
_____no_output_____
###Markdown
Examples
###Code
plt.plot([1,2,3,4])
plt.ylabel('some numbers')
plt.show()
with rasterio.drivers():
with rasterio.open(DEM_filepath) as source_dem:
array_dem = source_dem.read(1)
# no explicit close() needed - the with block already closes the dataset
plt.imshow(array_dem)
plt.ylabel("pixels")
with fiona.open(sample_points_filepath, 'r') as source_points:
points = [f['geometry']['coordinates'] for f in source_points]
#plt.figure()
for f in source_points:
x, y = f['geometry']['coordinates']
plt.plot(x, y, 'ro')
plt.show()
# no explicit close() needed - the with block already closes the collection
###Output
_____no_output_____ |
VariationalAutoEncoder.ipynb | ###Markdown
Variational AutoencoderIn this notebook we implement a basic test of a Variational Autoencoder (VAE). The variational autoencoder is used to learn the distribution of yield curves.The notebook is structured as follows: - Generate input yield curves from a Hull White model. - Set up a VAE based on the [TensorFlow tutorial](https://www.tensorflow.org/tutorials/generative/cvae). - Train the VAE on the Hull White yield curves and test the model. - Save and plot the model.
###Code
import numpy as np
import matplotlib.pyplot as plt
from tqdm.keras import TqdmCallback
import tensorflow as tf
###Output
_____no_output_____
###Markdown
Hull White Yield CurvesWe model yield curves in terms of *continuous compounded zero rates*. A zero rate yield curve is a function $z:[0,\infty)\times[0,\infty) \rightarrow \mathbb{R}$. For a given observation time $t\geq 0$ and maturity time $T\geq t$ the zero rate $z(t,T)$ gives a zero coupon bond price (or discount factor) $P(t,T)$ via the relation$$ P(t,T) = e^{-z(t,T)(T-t)}.$$Equivalently, we can calculate the zero rate from a zero coupon bond price as$$ z(t,T) = - \frac{\log\left( P(t,T) \right)}{T-t}.$$In Hull White model the zero bond prices can be reconstructed from a Gaussian state variable $x_t$ and$$ P(t,T) = \frac{P(0,T)}{P(0,t)} e^{-G(t,T)x_t - \frac{1}{2}G(t,T)^2y(t)}.$$Here, $G(t,T)$ and $y(t)$ are model functions given as$$ G(t,T) = \frac{1}{a}\left[1 - e^{-a(T-t)}\right]$$and$$ y(t) = \int_0^t \left[e^{-a(t-u)} \sigma(u) \right]^2 du = \frac{1}{2a}\left[1 - e^{-2at}\right]\sigma^2.$$Model parameters are mean reversion $a$ and short rate volatility $\sigma(t)=\sigma$.Consequently, yield curves can be represented as$$ z(t,T) = \frac{G(t,T)}{T-t} x_t + \frac{1}{2} \frac{G(t,T)^2y(t)}{T-t} + \left[ z(0,T) - z(0,t) \right].$$
###Code
class HullWhiteModel:
def __init__(self, mean_reversion, volatility, zeroYieldCurve=None):
self.mean_reversion = mean_reversion
self.volatility = volatility
self.zeroYieldCurve = zeroYieldCurve
def G(self, t,T):
return (1 - np.exp(-self.mean_reversion*(T-t))) / self.mean_reversion
def y(self, t):
return self.volatility**2 * (1 - np.exp(-2*self.mean_reversion*(t))) / 2 / self.mean_reversion
def zeroRate(self, x, t, T):
G = self.G(t,T)
z = G / (T-t) * x + 0.5 * G**2 * self.y(t) / (T-t)
if self.zeroYieldCurve is not None:
z += self.zeroYieldCurve(T) - self.zeroYieldCurve(t)
return z
def yieldCurves(self, t, delta_T, num_samples):
"""
Simulate yield curves from 0 to t using the model parameters.
State variables are simulated in t-forward measure.
Arguments:
t ... future observation time
delta_T ... array of time offsets to calculate z(t, t + delta_T)
num_samples ... number of yield curve samples calculated
Returns:
            An array of shape (num_samples, len(delta_T)) containing
simulated zero rates z(t, t + delta_T).
"""
x = np.random.normal(size=(num_samples,1)) * np.sqrt(self.y(t))
return self.zeroRate(x, t, t + delta_T)
###Output
_____no_output_____
###Markdown
We define a utility function to consistently plot curves.
###Code
def plot_yieldCurves(curves, N=10):
plt.Figure(figsize=(6,4))
N = np.minimum(N, curves.shape[0])
for yc in curves[:N]:
plt.plot(delta_T, yc)
plt.xlabel('maturity times $T-t$')
plt.ylabel('zero rate $z(t,T)$')
plt.title('simulated curves for $t=%.1f$ ($a=%.1f$%%, $\sigma=%.1f$bp)' % (t, model.mean_reversion*1e2, model.volatility*1e4))
plt.show()
#
print('Shape: ' + str(curves.shape))
###Output
_____no_output_____
###Markdown
We set up a simple Hull White model and simulate future yield curves.
###Code
model = HullWhiteModel(0.15, 0.0075) # 15% mean reversion (fairly high) and 75bp vol
t = 10.0 # horizon in 10y
delta_T = np.array([1.0/365, 0.5, 1.0, 2.0, 3.0, 5.0, 7.0, 10.0, 15.0, 20.0]) # a typical curve grid
num_samples = 2**10
yieldCurves = model.yieldCurves(10.0, delta_T, num_samples)
plot_yieldCurves(yieldCurves)
###Output
_____no_output_____
###Markdown
Variational Autoencoder using KerasWe set up a VAE implementation using Keras; see the [TensorFlow tutorial](https://www.tensorflow.org/tutorials/generative/cvae).
###Code
class VariationalAutoencoder(tf.keras.Model):
"""A variational autoencoder as a Keras model."""
def __init__(self, input_dim, hidden_dim, latent_dim, alpha=0.01):
super().__init__()
self.input_dim = input_dim # number of inputs and outputs flattened as vector
self.hidden_dim = hidden_dim # number of hidden nodes
self.latent_dim = latent_dim # number of latent variables, i.e. dimensionality of latent space
self.alpha = alpha # convex combination of to minimize reconstruction (0) or latent distribution (1)
#
lrelu = tf.keras.layers.LeakyReLU(alpha=0.3) # functor for activation function
#
self.encoder = tf.keras.Sequential( [
tf.keras.layers.InputLayer(input_shape=(self.input_dim)),
tf.keras.layers.Dense(self.hidden_dim, activation=lrelu),
tf.keras.layers.Dense(2 * self.latent_dim, activation=lrelu), # mu, logvar
] )
self.decoder = tf.keras.Sequential( [
tf.keras.layers.InputLayer(input_shape=(self.latent_dim)),
tf.keras.layers.Dense(self.hidden_dim, activation=lrelu),
tf.keras.layers.Dense(self.input_dim, activation=tf.keras.activations.linear),
] )
def encode(self, x):
mean, logvar = tf.split(self.encoder(x), num_or_size_splits=2, axis=1)
return mean, logvar
def reparameterize(self, mean, logvar):
eps = tf.random.normal(shape=tf.shape(mean))
return eps * tf.exp(logvar * .5) + mean
def decode(self, z):
return self.decoder(z)
def call(self, inputs):
"""
Specify model output calculation for training.
This function is overloaded from tf.keras.Model.
"""
mean, logvar = self.encode(inputs)
z = self.reparameterize(mean, logvar)
x_out = self.decode(z)
return tf.concat([x_out, mean, logvar], axis=1)
def lossfunction(self, y_true, y_pred, sample_weight=None):
"""
Specify the objective function for optimisation.
This function is input to tf.keras.Model.compile(...)
"""
y = tf.cast(y_true, tf.float32)
x_out = y_pred[:, : -2*self.latent_dim ]
mean = y_pred[:, -2*self.latent_dim : -self.latent_dim ]
logvar = y_pred[:, -self.latent_dim : ]
#
decoded_loss = tf.reduce_sum(tf.math.squared_difference(x_out, y), 1)
latent_loss = 0.5 * tf.reduce_sum(tf.exp(logvar) + tf.square(mean) - 1. - logvar, 1) # https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence#Multivariate_normal_distributions
loss = tf.reduce_mean((1 - self.alpha) * decoded_loss + self.alpha * latent_loss)
return loss
def sample(self, n_samples = 10, randoms=None):
"""
Calculate a sample of observations from the model.
"""
if randoms is None: # we do need to sample
randoms = tf.random.normal(shape=(n_samples, self.latent_dim))
return self.decode(randoms)
def functional_model(self):
"""
Return a standard tf.keras.Model via Functional API.
The resulting model can be used to plot the architecture.
"""
x = tf.keras.Input(shape=(self.input_dim))
return tf.keras.Model(inputs=[x], outputs=self.call(x))
###Output
_____no_output_____
###Markdown
Model Training and TestingNow, we can set up a model.
###Code
vae_model = VariationalAutoencoder(input_dim=yieldCurves.shape[1], hidden_dim=yieldCurves.shape[1], latent_dim=1, alpha=0.5*1e-4)
optimizer = tf.keras.optimizers.Adam(learning_rate=0.005)
vae_model.compile(optimizer=optimizer, loss=vae_model.lossfunction)
###Output
_____no_output_____
###Markdown
The model is trained using the curves generated from the (analytic) Hull White model.
###Code
vae_model.fit(x=yieldCurves, y=yieldCurves, epochs=100, callbacks=[TqdmCallback(verbose=0)], verbose=0)
yieldCurves_vae2 = vae_model.sample(10)
plot_yieldCurves(yieldCurves_vae2)
###Output
_____no_output_____
###Markdown
Save and Plot a ModelIn this section we explore functionality to save and plot Keras models.
###Code
model_folder_name = 'HullWhiteCurveVae'
vae_model.save(model_folder_name)
#
reconstructed_model = tf.keras.models.load_model(model_folder_name,
custom_objects={
'VariationalAutoencoder': VariationalAutoencoder,
'lossfunction' : VariationalAutoencoder.lossfunction,
}
)
###Output
_____no_output_____
###Markdown
We lose the custom methods like *VariationalAutoencoder.sample(...)* in the reconstructed model. Nevertheless, we can still access the attributes, and the *decoder* is all we need to generate samples.
###Code
def sample(model, latent_dim, n_samples = 10, randoms=None):
"""
Calculate a sample of observations from the model.
"""
if randoms is None: # we do need to sample
randoms = tf.random.normal(shape=(n_samples, latent_dim))
return model.decoder(randoms)
plot_yieldCurves(sample(reconstructed_model, 1, 10))
tf.keras.utils.plot_model(
vae_model.functional_model(),
to_file="model.png",
show_shapes=True,
show_dtype=False,
show_layer_names=False,
rankdir="TB",
expand_nested=False,
dpi=96,
layer_range=None,
show_layer_activations=False,
)
###Output
_____no_output_____
###Markdown
Conditional VAEWe extend the VAE by adding external conditions. This follows the ideas presented in [GitHub:MarketSimulator](https://github.com/imanolperez/market_simulator).In our yield curve example the external condition is *time-to-maturity*. That is, instead of a yield curve as a vector, we now learn a yield curve as a function.
###Code
class ConditionalVariationalAutoencoder(tf.keras.Model):
"""Conditional variational autoencoder."""
def __init__(self, input_dim, hidden_dim, latent_dim, output_dim, alpha=0.01):
super().__init__()
self.input_dim = input_dim
self.hidden_dim = hidden_dim
self.latent_dim = latent_dim
self.output_dim = output_dim
self.alpha = alpha
#
lrelu = tf.keras.layers.LeakyReLU(alpha=0.3) # functor for activation function
#
self.encoder = tf.keras.Sequential( [
tf.keras.layers.InputLayer(input_shape=(self.input_dim)),
tf.keras.layers.Dense(self.hidden_dim, activation=lrelu),
tf.keras.layers.Dense(2 * self.latent_dim, activation=lrelu), # mu, logvar
] )
self.decoder = tf.keras.Sequential( [
tf.keras.layers.InputLayer(input_shape=(self.latent_dim + self.input_dim - self.output_dim)),
tf.keras.layers.Dense(self.hidden_dim, activation=lrelu),
tf.keras.layers.Dense(self.output_dim, activation=tf.keras.activations.linear),
] )
def encode(self, x, c):
x = tf.concat([x, c], axis=1)
mean, logvar = tf.split(self.encoder(x), num_or_size_splits=2, axis=1)
return mean, logvar
def reparameterize(self, mean, logvar):
eps = tf.random.normal(shape=tf.shape(mean))
return eps * tf.exp(logvar * .5) + mean
def decode(self, z, c):
z = tf.concat([z, c], axis=1)
return self.decoder(z)
def call(self, inputs, training=False):
assert isinstance(inputs, (list, tuple))
assert len(inputs)==2
x = inputs[0]
c = inputs[1]
mean, logvar = self.encode(x, c)
z = self.reparameterize(mean, logvar)
x_out = self.decode(z, c)
return tf.concat([x_out, mean, logvar], axis=1)
def lossfunction(self, y_true, y_pred, sample_weight=None):
y = tf.cast(y_true, tf.float32)
x_out = y_pred[:, : -2*self.latent_dim ]
mean = y_pred[:, -2*self.latent_dim : -self.latent_dim ]
logvar = y_pred[:, -self.latent_dim : ]
#
decoded_loss = tf.reduce_sum(tf.math.squared_difference(x_out, y), 1)
latent_loss = 0.5 * tf.reduce_sum(tf.exp(logvar) + tf.square(mean) - 1. - logvar, 1) # https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence#Multivariate_normal_distributions
loss = tf.reduce_mean((1 - self.alpha) * decoded_loss + self.alpha * latent_loss)
return loss
def sample(self, n_samples, c, randoms=None):
if randoms is None: # we do need to sample
randoms = tf.random.normal(shape=(n_samples, self.latent_dim))
# we need the Cartesian product of randoms and conditions
z_full = tf.concat([randoms for row in c], axis=0)
zero = np.zeros(shape=(randoms.shape[0], c.shape[1]))
c_full = tf.concat([ tf.cast(zero+row, tf.float32) for row in c], axis=0)
dec_outputs = self.decode(z_full, c_full)
#return tf.reshape(dec_outputs, shape=(randoms.shape[0],c.shape[0]))
return \
tf.transpose(tf.reshape(dec_outputs, shape=(c.shape[0],randoms.shape[0]))), \
tf.transpose(tf.reshape(c_full, shape=(c.shape[0],randoms.shape[0])))
###Output
_____no_output_____
###Markdown
For each element of our yield curves we specify the time-to-maturity.
###Code
condition = np.zeros(shape=yieldCurves.shape) + delta_T
condition[:2,:]
###Output
_____no_output_____
###Markdown
Our VAE accepts inputs as vectors. Since we want to model individual yield curve values, we need to flatten curve values and time-to-maturity values.
###Code
x = yieldCurves.flatten()
x.shape = (x.shape[0], 1)
print(x.shape)
c = condition.flatten()
c.shape = (c.shape[0], 1)
print(c.shape)
###Output
_____no_output_____
###Markdown
Our VAE model has two inputs: curve value and time-to-maturity. As output we only have one quantity: curve value. We also put a lot of emphasis on re-constructions. Thus $\alpha$ is very small.
###Code
cvae_model = ConditionalVariationalAutoencoder(input_dim=2, hidden_dim=yieldCurves.shape[1], latent_dim=1, output_dim=1, alpha=0.5*1e-4)
optimizer = tf.keras.optimizers.Adam(learning_rate=0.005)
cvae_model.compile(optimizer=optimizer, loss=cvae_model.lossfunction)
###Output
_____no_output_____
###Markdown
For sample calculation we need to supply the condition (i.e. time-to-maturity) as a row of the condition matrix.
###Code
cvae_model.fit(x=(x,c), y=x, epochs=1000, callbacks=[TqdmCallback(verbose=0)], verbose=0)
#
cond = np.reshape(delta_T, (delta_T.shape[0],1))
yieldCurves_cvae, delta_T_s = cvae_model.sample(8, c=cond)
plot_yieldCurves(yieldCurves_cvae)
###Output
_____no_output_____
###Markdown
It seems the fit is not as good as when we learn the full shape of the curves. We can also verify that the data transformations worked out by inspecting the equally re-shaped condition.
###Code
delta_T_s[1]
###Output
_____no_output_____
###Markdown
We will use Keras and TensorFlow to implement the VAE ⏭
###Code
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from keras import backend as K
###Output
_____no_output_____
###Markdown
**REPARAMETERIZATION TRICK:** This sampling uses the mean and log-variance and samples z using a random value drawn from a standard normal distribution. ⚓ The reparameterization trick was first introduced by [Kingma and Welling, 2013](https://arxiv.org/pdf/1312.6114.pdf). The process is also described by [Gunderson](https://gregorygundersen.com/blog/2018/04/29/reparameterization/). ♋
###Code
class Sampling(layers.Layer):
"""Uses (z_mean, z_log_var) to sample z, the vector encoding a digit."""
def call(self, inputs):
z_mean, z_log_var = inputs
batch = tf.shape(z_mean)[0]
dim = tf.shape(z_mean)[1]
epsilon = tf.keras.backend.random_normal(shape=(batch, dim))
return z_mean + tf.exp(0.5 * z_log_var) * epsilon
###Output
_____no_output_____
###Markdown
VAE Encoder ▶ ▶ ▶ ☕ The encoder creates z_mean and z_log_var, then samples z from them using epsilon.
###Code
latent_dim = 2 # dimensionality of the latent space (z_mean and z_log_var each have this size)
encoder_inputs = keras.Input(shape=(28, 28, 1))
x = layers.Conv2D(32, 3, activation="relu", strides=2, padding="same")(encoder_inputs)
x = layers.Conv2D(64, 3, activation="relu", strides=2, padding="same")(x)
conv_shape = K.int_shape(x) #Shape of conv to be provided to decoder
print(conv_shape)
x = layers.Flatten()(x)
x = layers.Dense(32, activation="relu")(x)
z_mean = layers.Dense(latent_dim, name="z_mean")(x)
z_log_var = layers.Dense(latent_dim, name="z_log_var")(x)
z = Sampling()([z_mean, z_log_var])
encoder = keras.Model(encoder_inputs, [z_mean, z_log_var, z], name="encoder")
encoder.summary()
###Output
(None, 7, 7, 64)
Model: "encoder"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_6 (InputLayer) [(None, 28, 28, 1)] 0 []
conv2d_6 (Conv2D) (None, 14, 14, 32) 320 ['input_6[0][0]']
conv2d_7 (Conv2D) (None, 7, 7, 64) 18496 ['conv2d_6[0][0]']
flatten_2 (Flatten) (None, 3136) 0 ['conv2d_7[0][0]']
dense_4 (Dense) (None, 32) 100384 ['flatten_2[0][0]']
z_mean (Dense) (None, 2) 66 ['dense_4[0][0]']
z_log_var (Dense) (None, 2) 66 ['dense_4[0][0]']
sampling_2 (Sampling) (None, 2) 0 ['z_mean[0][0]',
'z_log_var[0][0]']
==================================================================================================
Total params: 119,332
Trainable params: 119,332
Non-trainable params: 0
__________________________________________________________________________________________________
###Markdown
VAE Decoder ◀ ◀ ◀☁ The tied architecture (reverse architecture from encoder to decoder) is preferred in AE. There is an [explanation](https://stats.stackexchange.com/questions/419684/why-is-the-autoencoder-decoder-usually-the-reverse-architecture-as-the-encoder) about it.---
###Code
latent_inputs = keras.Input(shape=(latent_dim,))
x = layers.Dense(conv_shape[1] * conv_shape[2] * conv_shape[3], activation="relu")(latent_inputs) # 7x7x64 shape
x = layers.Reshape((conv_shape[1],conv_shape[2], conv_shape[3]))(x)
x = layers.Conv2DTranspose(64, 3, activation="relu", strides=2, padding="same")(x)
x = layers.Conv2DTranspose(32, 3, activation="relu", strides=2, padding="same")(x)
decoder_outputs = layers.Conv2DTranspose(1, 3, activation="sigmoid", padding="same")(x)
decoder = keras.Model(latent_inputs, decoder_outputs, name="decoder")
decoder.summary()
###Output
Model: "decoder"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_7 (InputLayer) [(None, 2)] 0
dense_5 (Dense) (None, 3136) 9408
reshape_2 (Reshape) (None, 7, 7, 64) 0
conv2d_transpose_6 (Conv2DT (None, 14, 14, 64) 36928
ranspose)
conv2d_transpose_7 (Conv2DT (None, 28, 28, 32) 18464
ranspose)
conv2d_transpose_8 (Conv2DT (None, 28, 28, 1) 289
ranspose)
=================================================================
Total params: 65,089
Trainable params: 65,089
Non-trainable params: 0
_________________________________________________________________
###Markdown
VAE MODEL ✅
###Code
class VAE(keras.Model):
def __init__(self, encoder, decoder, **kwargs):
super(VAE, self).__init__(**kwargs)
self.encoder = encoder
self.decoder = decoder
self.total_loss_tracker = keras.metrics.Mean(name="total_loss")
self.reconstruction_loss_tracker = keras.metrics.Mean(
name="reconstruction_loss"
)
self.kl_loss_tracker = keras.metrics.Mean(name="kl_loss")
@property
def metrics(self):
return [
self.total_loss_tracker,
self.reconstruction_loss_tracker,
self.kl_loss_tracker,
]
def train_step(self, data):
with tf.GradientTape() as tape:
z_mean, z_log_var, z = self.encoder(data)
reconstruction = self.decoder(z)
reconstruction_loss = tf.reduce_mean(
tf.reduce_sum(
keras.losses.binary_crossentropy(data, reconstruction), axis=(1, 2)
)
)
kl_loss = -0.5 * (1 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var))
kl_loss = tf.reduce_mean(tf.reduce_sum(kl_loss, axis=1))
total_loss = reconstruction_loss + kl_loss
grads = tape.gradient(total_loss, self.trainable_weights)
self.optimizer.apply_gradients(zip(grads, self.trainable_weights))
self.total_loss_tracker.update_state(total_loss)
self.reconstruction_loss_tracker.update_state(reconstruction_loss)
self.kl_loss_tracker.update_state(kl_loss)
return {
"loss": self.total_loss_tracker.result(),
"reconstruction_loss": self.reconstruction_loss_tracker.result(),
"kl_loss": self.kl_loss_tracker.result(),
}
from google.colab import drive
drive.mount('/content/gdrive')
###Output
Mounted at /content/gdrive
###Markdown
⛹ If you want to run it on your desktop, you can download the [data](https://www.kaggle.com/nikbearbrown/tmnist-alphabet-94-characters) and read it from the same directory as this ipynb file
###Code
import pandas as pd
df = pd.read_csv('gdrive/My Drive/DeepLearning/94_character_TMNIST.csv')
#df = pd.read_csv('94_character_TMNIST.csv')
print(df.shape)
X = df.drop(columns={'names','labels'})
X_images = X.values.reshape(-1,28,28)
X_images = np.expand_dims(X_images, -1).astype("float32") / 255
###Output
_____no_output_____
###Markdown
⚡ I tried different batch sizes (32, 64, 128, 256) to train the VAE model; 128 gives better results than the others.
###Code
vae = VAE(encoder, decoder)
vae.compile(optimizer=keras.optimizers.Adam())
vae.fit(X_images, epochs=10, batch_size=128)
###Output
_____no_output_____
###Markdown
⛳ This plots the latent-space image between **[scale_x_left, scale_x_right]** on x and **[scale_y_bottom, scale_y_top]** on y
###Code
import matplotlib.pyplot as plt
def plot_latent_space(vae, n=8, figsize=12):
# display a n*n 2D manifold of digits
digit_size = 28
    scale_x_left = 1 # If we change the range, it generates a different image.
scale_x_right = 4
scale_y_bottom = 0
scale_y_top = 1
figure = np.zeros((digit_size * n, digit_size * n))
    # If we want to see a different x and y range we can change the values in grid_x and grid_y. I tried x = [-3,-2] and y = [-3,-1], and images resembling the label 'm' are generated.
grid_x = np.linspace(scale_x_left, scale_x_right, n) # -3, -2
grid_y = np.linspace(scale_y_bottom, scale_y_top, n)[::-1] # -3, -1
for i, yi in enumerate(grid_y):
for j, xi in enumerate(grid_x):
z_sample = np.array([[xi, yi]])
x_decoded = vae.decoder.predict(z_sample)
digit = x_decoded[0].reshape(digit_size, digit_size)
figure[
i * digit_size : (i + 1) * digit_size,
j * digit_size : (j + 1) * digit_size,
] = digit
plt.figure(figsize=(figsize, figsize))
start_range = digit_size // 2
end_range = n * digit_size + start_range
pixel_range = np.arange(start_range, end_range, digit_size)
sample_range_x = np.round(grid_x, 1)
sample_range_y = np.round(grid_y, 1)
plt.xticks(pixel_range, sample_range_x)
plt.yticks(pixel_range, sample_range_y)
plt.xlabel("z[0]")
plt.ylabel("z[1]")
plt.imshow(figure, cmap="Greys_r")
plt.show()
plot_latent_space(vae)
###Output
_____no_output_____
###Markdown
♑ When we plot all the training data with labels, we can see the ***z_mean*** values of the data. ❎ If we sample with one of these ***z_mean*** values, we can obtain a similar image from the latent space. ⭕ This is because two points that are close to each other in latent space look similar (variants of the same label).
###Code
def plot_label_clusters(vae, data, labels):
# display a 2D plot of the digit classes in the latent space
z_mean, _, _ = vae.encoder.predict(data)
plt.figure(figsize=(12, 12))
plt.scatter(z_mean[:, 0], z_mean[:, 1], c=labels)
plt.colorbar()
plt.xlabel("z[0]")
plt.ylabel("z[1]")
plt.show()
y = df[['labels']]
from sklearn import preprocessing
le = preprocessing.LabelEncoder()
y_label = le.fit_transform(y)
plot_label_clusters(vae, X_images, y_label)
###Output
/usr/local/lib/python3.7/dist-packages/sklearn/preprocessing/_label.py:115: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().
y = column_or_1d(y, warn=True)
###Markdown
⛪ Visualize one image
###Code
#Single decoded image with random input latent vector (of size 1x2)
#Latent space range is about -5 to 5 so pick random values within this range
sample_vector = np.array([[3,0.5]])
decoded_example = decoder.predict(sample_vector)
decoded_example_reshaped = decoded_example.reshape(28, 28)
plt.imshow(decoded_example_reshaped)
###Output
_____no_output_____ |
solutions/mid1/submissions/liangtimothy_167874_6241815_MIDTERM1.ipynb | ###Markdown
1.1 False. MV optimization optimizes the whole portfolio: if any asset has a high positive or negative Sharpe ratio, it is normal for the optimization to put more weight on that asset; however, assets with low correlation to the other assets (say, nearly risk-free assets) will also be included in the portfolio to diversify risk. 1.2 False. Since the LETF has tracking errors, and they are cumulative, the total tracking error grows in volatility over time. Holding an LETF long-term therefore bears higher risk. 1.3 Mean returns may be estimated inaccurately, so we may want to include alpha to eliminate the means in order to focus on explaining variation. So yes, we should include an intercept. 1.4 In sample: HDG has a beta around ~0.35 and a Treynor ratio of 0.07. Two of these assets also have negative information ratios. When looking at HFRI, the beta is only a little higher than the HDG betas - and the same holds for the Treynor ratio. While the HFRI information ratio has a significantly lower absolute value than any of the HDG, it is negative as well. We can conclude that the most notable features of HFRI are captured by HDG because there are no huge jumps in numbers or sign changes in these statistics. Out of sample: the out-of-sample replication also performs pretty well with respect to the target according to our previous calculation. 1.5 It could be that this hedge fund didn't regress on the Merrill-Lynch style factors, or on very few of them, thus leaving the variation from those factors entirely to alpha, which would make the alpha look higher and make the fund look more skilled.
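A quick numeric sketch of the volatility-decay point in 1.2 (hypothetical numbers: ±10% moves and a 2x leverage factor, purely illustrative):
###Code
# Sketch with hypothetical numbers: a two-day round trip of +10% then -10%.
underlying = (1 + 0.10) * (1 - 0.10) - 1  # -1.0%
letf_2x = (1 + 0.20) * (1 - 0.20) - 1     # -4.0%
print(f'underlying: {underlying:.2%}, 2x LETF: {letf_2x:.2%}')
###Output
_____no_output_____
###Markdown
The 2x LETF loses 4% while twice the underlying's round-trip loss is only 2%; this compounding gap grows with volatility and holding period, which is why a long-term LETF holding bears extra risk.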
###Code
#imports and useful functions
import pandas as pd
import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf
pd.options.display.float_format = "{:,.4f}".format
import matplotlib.pyplot as plt
import seaborn as sns
from statsmodels.regression.rolling import RollingOLS
from sklearn.linear_model import LinearRegression
import warnings
warnings.filterwarnings("ignore")
def performanceMetrics(returns,annualization=12):
metrics = pd.DataFrame(index=returns.columns)
metrics['Mean'] = returns.mean() * annualization
metrics['Vol'] = returns.std() * np.sqrt(annualization)
metrics['Sharpe'] = (returns.mean() / returns.std()) * np.sqrt(annualization)
metrics['Min'] = returns.min()
metrics['Max'] = returns.max()
return metrics
def portfolio_stats(omega, mu_tilde, Sigma, annualize_fac):
mean = (mu_tilde @ omega) * annualize_fac
vol = np.sqrt(omega @ Sigma @ omega) * np.sqrt(annualize_fac)
sharpe_ratio = mean / vol
return round(pd.DataFrame(data = [mean, vol, sharpe_ratio],
index = ['Mean', 'Volatility', 'Sharpe'],
columns = ['Portfolio Stats']), 4)
def tangency_weights(returns,dropna=True,scale_cov=1):
if dropna:
returns = returns.dropna()
covmat_full = returns.cov()
covmat_diag = np.diag(np.diag(covmat_full))
covmat = scale_cov * covmat_full + (1-scale_cov) * covmat_diag
weights = np.linalg.solve(covmat,returns.mean())
weights = weights / weights.sum()
return pd.DataFrame(weights, index=returns.columns)
def display_correlation(df,list_maxmin=True):
corrmat = df.corr()
corrmat[corrmat==1] = None
sns.heatmap(corrmat)
if list_maxmin:
corr_rank = corrmat.unstack().sort_values().dropna()
pair_max = corr_rank.index[-1]
pair_min = corr_rank.index[0]
print(f'MIN Correlation pair is {pair_min}')
print(f'MAX Correlation pair is {pair_max}')
def maximumDrawdown(returns):
cum_returns = (1 + returns).cumprod()
rolling_max = cum_returns.cummax()
drawdown = (cum_returns - rolling_max) / rolling_max
max_drawdown = drawdown.min()
end_date = drawdown.idxmin()
summary = pd.DataFrame({'Max Drawdown': max_drawdown, 'Bottom': end_date})
for col in drawdown:
summary.loc[col,'Peak'] = (rolling_max.loc[:end_date[col],col]).idxmax()
recovery = (drawdown.loc[end_date[col]:,col])
try:
summary.loc[col,'Recover'] = pd.to_datetime(recovery[recovery >= 0].index[0])
except:
summary.loc[col,'Recover'] = pd.to_datetime(None)
summary['Peak'] = pd.to_datetime(summary['Peak'])
try:
summary['Duration (to Recover)'] = (summary['Recover'] - summary['Peak'])
except:
summary['Duration (to Recover)'] = None
summary = summary[['Max Drawdown','Peak','Bottom','Recover','Duration (to Recover)']]
return summary
def tailMetrics(returns, quantile=.05, relative=False, mdd=True):
metrics = pd.DataFrame(index=returns.columns)
metrics['Skewness'] = returns.skew()
metrics['Kurtosis'] = returns.kurtosis()
VaR = returns.quantile(quantile)
CVaR = (returns[returns < returns.quantile(quantile)]).mean()
if relative:
VaR = (VaR - returns.mean())/returns.std()
CVaR = (CVaR - returns.mean())/returns.std()
metrics[f'VaR ({quantile})'] = VaR
metrics[f'CVaR ({quantile})'] = CVaR
if mdd:
mdd_stats = maximumDrawdown(returns)
metrics = metrics.join(mdd_stats)
if relative:
metrics['Max Drawdown'] = (metrics['Max Drawdown'] - returns.mean())/returns.std()
return metrics
# round(static_model.rsquared,4)
# round(static_model.resid.std() * np.sqrt(12),4)
# model = RollingOLS(y,X,window=60)
# rolling_betas = model.fit().params.copy()
# q1_df.style.set_caption('Solution Table 1: mean, volatility and Sharpe ratio of each asset (Annualized)')
# replication = hf_data[['HFRIFWI Index']].copy()
# replication['Static-IS-Int'] = static_model.fittedvalues
# replication['Rolling-IS-Int'] = rep_IS
# replication['Rolling-OOS-Int'] = rep_OOS
# replication[['Rolling-OOS-Int','HFRIFWI Index']].plot()
# p = scipy.stats.norm.cdf(x)
# p = scipy.stats.norm.ppf(x)
# interval=stats.t.interval(0.95,len(x)-1,mean,std)
#hf_data = pd.read_excel('proshares_analysis_data.xlsx', sheet_name = 'hedge_fund_series')
#hf_data = hf_data.set_index('date')
factor_data = pd.read_excel('../proshares_analysis_data.xlsx', sheet_name = 'merrill_factors')
factor_data = factor_data.set_index('date')
factor_data
#2.1
rf = factor_data['USGG3M Index']
df_risky_assets = factor_data[factor_data.columns.difference(['USGG3M Index'])]
df_risky_assets = df_risky_assets.sub(rf,axis = 0)
tw = tangency_weights(df_risky_assets,dropna=True,scale_cov=1)
tw
#2.2
# grading purpose
def compute_tangency(df_tilde, diagonalize_Sigma=False):
Sigma = df_tilde.cov()
# N is the number of assets
N = Sigma.shape[0]
Sigma_adj = Sigma.copy()
if diagonalize_Sigma:
Sigma_adj.loc[:,:] = np.diag(np.diag(Sigma_adj))
mu_tilde = df_tilde.mean()
Sigma_inv = np.linalg.inv(Sigma_adj)
weights = Sigma_inv @ mu_tilde / (np.ones(N) @ Sigma_inv @ mu_tilde)
# For convenience, I'll wrap the solution back into a pandas.Series object.
omega_tangency = pd.Series(weights, index=mu_tilde.index)
return omega_tangency, mu_tilde, Sigma_adj
def target_mv_portfolio(df_tilde, target_return=0.01, diagonalize_Sigma=False):
omega_tangency, mu_tilde, Sigma = compute_tangency(df_tilde, diagonalize_Sigma=diagonalize_Sigma)
Sigma_adj = Sigma.copy()
if diagonalize_Sigma:
Sigma_adj.loc[:,:] = np.diag(np.diag(Sigma_adj))
Sigma_inv = np.linalg.inv(Sigma_adj)
N = Sigma_adj.shape[0]
delta_tilde = ((np.ones(N) @ Sigma_inv @ mu_tilde)/(mu_tilde @ Sigma_inv @ mu_tilde)) * target_return
omega_star = delta_tilde * omega_tangency
return omega_star, mu_tilde, Sigma_adj
omega_star, mu_tilde, Sigma = target_mv_portfolio(df_risky_assets,target_return=0.02)
omega_star_df = omega_star.to_frame('MV Portfolio Weights')
#omega_star_df
omega_star
#2.2
def compute_tangency(df_tilde, diagonalize_Sigma=False):
Sigma = df_tilde.cov()
# N is the number of assets
N = Sigma.shape[0]
Sigma_adj = Sigma.copy()
if diagonalize_Sigma:
Sigma_adj.loc[:,:] = np.diag(np.diag(Sigma_adj))
mu_tilde = df_tilde.mean()
Sigma_inv = np.linalg.inv(Sigma_adj)
weights = Sigma_inv @ mu_tilde / (np.ones(N) @ Sigma_inv @ mu_tilde)
# For convenience, I'll wrap the solution back into a pandas.Series object.
omega_tangency = pd.Series(weights, index=mu_tilde.index)
return omega_tangency, mu_tilde, Sigma_adj
def target_mv_portfolio(df_tilde, target_return=0.01, diagonalize_Sigma=False):
omega_tangency, mu_tilde, Sigma = compute_tangency(df_tilde, diagonalize_Sigma=diagonalize_Sigma)
Sigma_adj = Sigma.copy()
if diagonalize_Sigma:
Sigma_adj.loc[:,:] = np.diag(np.diag(Sigma_adj))
Sigma_inv = np.linalg.inv(Sigma_adj)
N = Sigma_adj.shape[0]
delta_tilde = ((np.ones(N) @ Sigma_inv @ mu_tilde)/(mu_tilde @ Sigma_inv @ mu_tilde)) * target_return
omega_star = delta_tilde * omega_tangency
return omega_star, mu_tilde, Sigma_adj
omega_star, mu_tilde, Sigma = target_mv_portfolio(df_risky_assets,target_return=0.12)
omega_star_df = omega_star.to_frame('MV Portfolio Weights')
#omega_star_df
omega_star
###Output
_____no_output_____
###Markdown
This optimal portfolio also invests in the risk-free rate, since the sum of the weights of all 5 risky assets is not 1; the residual weight sits in the risk-free asset, as the quick check below shows.
###Code
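# Quick sketch: the residual weight 1 - sum(omega_star) is the implied
# position in the risk-free asset.
print(f'implied risk-free weight: {1 - omega_star.sum():.4f}')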
#2.3
# stats2_3 = performanceMetrics(df_risky_assets)
res2_3 = portfolio_stats(omega_star, mu_tilde,Sigma, 12)
res2_3
#2.4
omega_star, mu_tilde, Sigma = target_mv_portfolio(df_risky_assets.loc['2018':],target_return=0.12)
returns = ((omega_star * df_risky_assets.loc['2019':])+1).cumprod()
print('overall return in 2019-2021:')
print(returns.iloc[-1,:])
print('stats in 2019-2021:')
performanceMetrics(returns,annualization=12)
#2.4
omega_star, mu_tilde, Sigma = target_mv_portfolio(df_risky_assets.loc['2018':],target_return=0.02)
returns = ((omega_star * df_risky_assets.loc['2019':])+1).cumprod()
print('overall return in 2019-2021:')
print(returns.iloc[-1,:])
print('stats in 2019-2021:')
performanceMetrics(returns,annualization=12)
###Output
overall return in 2019-2021:
EEM US Equity 1.0180
EFA US Equity 0.5575
EUO US Equity 1.0065
IWM US Equity 0.7815
SPY US Equity 3.2454
Name: 2021-09-30 00:00:00, dtype: float64
stats in 2019-2021:
###Markdown
2.5 Yes. Commodity futures are traded with leverage and have more dramatic volatility than the 5 risky assets, so the out-of-sample performance might decay quicker than for the 5 risky assets.
###Code
#3.1
y = df_risky_assets['EEM US Equity']
X = df_risky_assets['SPY US Equity']
static_model = sm.OLS(y,X).fit()
print(f'for every dollar invested in EEM, I would short {static_model.params[0]:.4f} dollars of SPY')
#3.2
pos = pd.DataFrame([1, -0.9256], index=['EEM US Equity', 'SPY US Equity']).T  # use an ordered list as the index (a set has no guaranteed order)
hedged_portfolio = portfolio_stats(pos, df_risky_assets[['EEM US Equity','SPY US Equity']].mean().T
,df_risky_assets[['EEM US Equity','SPY US Equity']].std().T, 12)
hedged_portfolio
###Output
_____no_output_____
###Markdown
3.3 No. Since the hedged position is built without an intercept, we only replicate and track the volatility of the tracking errors; the mean that SPY explains through the beta parameter is hedged out, so the mean will be lower than for the unhedged position (a quick check follows in the next cell).
###Code
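# Sketch: compare the hedged position's mean with the unhedged EEM mean,
# using the hedge ratio (~0.9256) estimated in 3.1.
hedged_ret = df_risky_assets['EEM US Equity'] - 0.9256 * df_risky_assets['SPY US Equity']
print(f"hedged mean (ann.): {hedged_ret.mean() * 12:.4f}")
print(f"unhedged EEM mean (ann.): {df_risky_assets['EEM US Equity'].mean() * 12:.4f}")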
#3.4
y = df_risky_assets['EEM US Equity']
X = df_risky_assets[['SPY US Equity','IWM US Equity']]
static_model = sm.OLS(y,X).fit()
static_model.params
###Output
_____no_output_____
###Markdown
3.5 When we incorporate IWM, we add more risk from changing the positions of the portfolio, so it is harder than just hedging with SPY.
###Code
#4.1
total_returns = factor_data[['SPY US Equity','EFA US Equity']]
perf = performanceMetrics(total_returns,annualization=12)
perf
#4.2
import scipy
roll_vol = total_returns[['EFA US Equity']].expanding(60).std().dropna()
# 1% normal VaR estimate: expanding-window volatility scaled by the 1% normal quantile
Var_estimate = roll_vol * scipy.stats.norm.ppf(0.01)
Var_estimate
###Output
_____no_output_____ |
Numpy-specific_help_functions_Solutions.ipynb | ###Markdown
Q1. Search for docstrings of the numpy functions on linear algebra.
###Code
np.lookfor('linear algebra')
###Output
Search results for 'linear algebra'
-----------------------------------
numpy.linalg.solve
Solve a linear matrix equation, or system of linear scalar equations.
numpy.poly
Find the coefficients of a polynomial with the given sequence of roots.
numpy.restoredot
Restore `dot`, `vdot`, and `innerproduct` to the default non-BLAS
numpy.linalg.eig
Compute the eigenvalues and right eigenvectors of a square array.
numpy.linalg.cond
Compute the condition number of a matrix.
numpy.linalg.eigh
Return the eigenvalues and eigenvectors of a Hermitian or symmetric matrix.
numpy.linalg.pinv
Compute the (Moore-Penrose) pseudo-inverse of a matrix.
numpy.linalg.LinAlgError
Generic Python-exception-derived object raised by linalg functions.
###Markdown
Q2. Get help information for numpy dot function.
###Code
np.info(np.dot)
###Output
dot(a, b, out=None)
Dot product of two arrays.
For 2-D arrays it is equivalent to matrix multiplication, and for 1-D
arrays to inner product of vectors (without complex conjugation). For
N dimensions it is a sum product over the last axis of `a` and
the second-to-last of `b`::
dot(a, b)[i,j,k,m] = sum(a[i,j,:] * b[k,:,m])
Parameters
----------
a : array_like
First argument.
b : array_like
Second argument.
out : ndarray, optional
Output argument. This must have the exact kind that would be returned
if it was not used. In particular, it must have the right type, must be
C-contiguous, and its dtype must be the dtype that would be returned
for `dot(a,b)`. This is a performance feature. Therefore, if these
conditions are not met, an exception is raised, instead of attempting
to be flexible.
Returns
-------
output : ndarray
Returns the dot product of `a` and `b`. If `a` and `b` are both
scalars or both 1-D arrays then a scalar is returned; otherwise
an array is returned.
If `out` is given, then it is returned.
Raises
------
ValueError
If the last dimension of `a` is not the same size as
the second-to-last dimension of `b`.
See Also
--------
vdot : Complex-conjugating dot product.
tensordot : Sum products over arbitrary axes.
einsum : Einstein summation convention.
matmul : '@' operator as method with out parameter.
Examples
--------
>>> np.dot(3, 4)
12
Neither argument is complex-conjugated:
>>> np.dot([2j, 3j], [2j, 3j])
(-13+0j)
For 2-D arrays it is the matrix product:
>>> a = [[1, 0], [0, 1]]
>>> b = [[4, 1], [2, 2]]
>>> np.dot(a, b)
array([[4, 1],
[2, 2]])
>>> a = np.arange(3*4*5*6).reshape((3,4,5,6))
>>> b = np.arange(3*4*5*6)[::-1].reshape((5,4,6,3))
>>> np.dot(a, b)[2,3,2,1,2,2]
499128
>>> sum(a[2,3,2,:] * b[1,2,:,2])
499128
|
content/_build/html/_sources/04_writing_our_own_container_types/writing_our_own_container_types.ipynb | ###Markdown
<img src="https://colab.research.google.com/assets/colab-badge.svg" title="Open this file in Google Colab" alt="Colab"/> Creating new collectionsWe've seen collections like lists, strings and tuples that allow indexed access `mylist[0]`And we've seen collections like dict and set that allow keyed access`menu_prices_dict['hamburger'] = 59`In this chapter we will learn how the magic behind collections and access works and how to create our own containers or customize existing containers Note: In this lesson we're going to rely heavily on decorators, duck-typing, protocols, and ABCs - which we've covered in lessons 02 and 03 Iterator protocol Lets start with one of the simplest python protocols: the __iterator__ protocol. [What are iterators in Python?][1]Iterators are everywhere in Python. They are elegantly implemented within for loops, comprehensions, generators etc. but hidden in plain sight.An iterator in Python is simply an object that can be iterated upon - an object which will return data, one element at a time.Technically speaking, a Python iterator object must implement two special methods, `__iter__()` and `__next__()`, collectively called the iterator protocol.An object is called iterable if we can get an iterator from it. Most of the built-in containers in Python, like list, tuple, string etc., are iterables.The iter() function (which in turn calls the `__iter__()` method) returns an iterator from them. [Iterator Types][2]Python supports a concept of iteration over containers. This is implemented using two distinct methods; these are used to allow user-defined classes to support iteration. Sequences, described below in more detail, always support the iteration methods.One method needs to be defined for container objects to provide iteration support:`container.__iter__()`> Return an iterator object. The object is required to support the iterator protocol described below. If a container supports different types of iteration, additional methods can be provided to specifically request iterators for those iteration types. (An example of an object supporting multiple forms of iteration would be a tree structure which supports both breadth-first and depth-first traversal.) This method corresponds to the tp_iter slot of the type structure for Python objects in the Python/C API.The iterator objects themselves are required to support the following two methods, which together form the iterator protocol:`iterator.__iter__()`> Return the iterator object itself. This is required to allow both containers and iterators to be used with the for and in statements. This method corresponds to the tp_iter slot of the type structure for Python objects in the Python/C API.`iterator.__next__()`> Return the next item from the container. If there are no further items, raise the StopIteration exception. This method corresponds to the tp_iternext slot of the type structure for Python objects in the Python/C API.Python defines several iterator objects to support iteration over general and specific sequence types, dictionaries, and other more specialized forms. The specific types are not important beyond their implementation of the iterator protocol.Once an iterator’s __next__() method raises StopIteration, it must continue to do so on subsequent calls. Implementations that do not obey this property are deemed broken.[1]: https://www.programiz.com/python-programming/iterator[2]: https://docs.python.org/3.7/library/stdtypes.html#iterator-types ExampleBelow we show an example class that implements the iterator protocol
###Code
class VersionedObject :
"""
VersionedObject is a type that remembers all the past values it held
and can return the history of its values with a for loop
"""
def __init__(self, value=None):
self.__values = [ value ]
def update(self, value):
self.__values.append(value)
def latest(self):
return self.__values[-1]
def __iter__(self):
return self.Iterator(self.__values)
class Iterator:
def __init__(self, values):
self.__index = 0
self.__values = values
def __next__(self):
# Return the next item from the container.
# If there are no further items, raise the StopIteration exception
if self.__index is None or len(self.__values) <= self.__index:
# Once an iterator’s next() method raises StopIteration, it must continue to do so
self.__index = None
raise StopIteration()
value = self.__values[self.__index]
self.__index += 1
return value
def __iter__(self):
# Return the iterator object itself.
# This is required to allow both containers and iterators to be used with the for and in statements
return self #
x = VersionedObject([1])
x.update(2)
x.update("third version")
x.update(4)
for older_value in x: # calls the __iter__ method on x
print(older_value) # calls the __next__ method on the iterator
"""
Lets simplify the code for VersionedObject, by using the fact that lists []
also support the iterator protocol themselves
"""
class VersionedObject_2 :
def __init__(self, value=None):
self.__values = [ value ]
def update(self, value):
self.__values.append(value)
def latest(self):
return self.__values[-1]
def __iter__(self):
return iter(self.__values) # calls self.__values iterator
x = VersionedObject_2([1])
x.update(2)
x.update("third version")
x.update(4)
for older_value in x: # calls the __iter__ method on x
print(older_value) # calls the __next__ method on the iterator
###Output
[1]
2
third version
4
###Markdown
[Sequence protocol](https://docs.python.org/3.7/library/stdtypes.html#iterator-types)There are three basic sequence types: lists, tuples, and range objects.Sequences support the following operations:
```
Operation             Result
x in s                True if an item of s is equal to x, else False
x not in s            False if an item of s is equal to x, else True
s + t                 the concatenation of s and t
s * n or n * s        equivalent to adding s to itself n times
s[i]                  ith item of s, origin 0
s[i:j]                slice of s from i to j
s[i:j:k]              slice of s from i to j with step k
len(s)                length of s
min(s)                smallest item of s
max(s)                largest item of s
s.index(x[, i[, j]])  index of the first occurrence of x in s (at or after index i and before index j)
s.count(x)            total number of occurrences of x in s
```
This is a rather long list of operations ... also it doesn't tell us which methods to implement and how ... to help us with this difficult task, python provides an abstract base class (ABC), `collections.abc.Sequence`, that implements most of the sequence protocol and only asks us to implement two abstract functions: `__len__` and `__getitem__`> * if you're interested in the details of how the Sequence class implements other functions such as `index`, `count` or `__contains__` just by using `__len__` and `__getitem__`, see the `_collections_abc.py` module in the python standard library. As a tiny illustration, the sketch below shows how `__contains__` could be derived from `__len__` and `__getitem__` alone (illustrative only, not the actual library code):
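###Code
# Illustrative sketch only - not the actual _collections_abc.py implementation:
# __contains__ can be derived purely from __len__ and __getitem__.
def contains(seq, value):
    for i in range(len(seq)):
        if seq[i] == value:
            return True
    return False

print(contains([10, 20, 30], 20)) # True
print(contains([10, 20, 30], 99)) # False
###Output
_____no_output_____
###Markdown
Lets see `collections.abc.Sequence` in action by implementing an incredibly simple sequence that we're already familiar with - range.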
###Code
import collections.abc
import itertools  # needed for islice in __getitem__ slicing
import math
class MyRange(collections.abc.Sequence):
def __init__(self, start, stop, step=1):
self.__start = start
self.__stop = stop
self.__step = step
self.__n = max(0, math.ceil((stop-start) / step))
super().__init__()
def __len__(self):
return self.__n
def __getitem__(self, offset):
if isinstance(offset, slice):
return itertools.islice(self, offset.start, offset.stop, offset.step)
if self.__n <= offset:
raise IndexError('range object index out of range')
return self.__start + offset * self.__step
def __repr__(self):
return f"{type(self).__name__}({self.__start},{self.__stop},{self.__step})"
# Let's use MyRange
range5 = MyRange(0, 5)
# convert to list
print(list(range5)) # [0, 1, 2, 3, 4]
# use indexing
print(range5[0], range5[1], range5[2]) # 0 1 2
# use 'in' keyword
print(3 in range5) # true
print(100 in range5) # false
# min/max/count
print(min(range5)) # 0
print(max(range5)) # 4
print(range5.count(4)) # 1
###Output
[0, 1, 2, 3, 4]
0 1 2
True
False
0
4
1
###Markdown
[Mapping protocol](https://docs.python.org/3.7/library/stdtypes.html#dict)A mapping object maps hashable values to arbitrary objects. Mappings are mutable objects. There is currently only one standard mapping type, the dictionary. A dictionary’s keys are _almost_ arbitrary values: keys have to be __hashable__, which means that keys must be immutable. That is, objects containing lists, dictionaries or other mutable types may not be used as keys.Mappings should support the following operations:
```
len(d)             Return the number of items in the dictionary d.
d[key]             Return the item of d with key key. Raises a KeyError if key is not in the map.
d[key] = value     Set d[key] to value.
del d[key]         Remove d[key] from d. Raises a KeyError if key is not in the map.
key in d           Return True if d has a key key, else False.
key not in d       Equivalent to not key in d.
iter(d)            Return an iterator over the keys of the dictionary. This is a shortcut for iter(d.keys()).
clear()            Remove all items from the dictionary.
copy()             Return a shallow copy of the dictionary.
@classmethod fromkeys(iterable[, value])
                   Create a new dictionary with keys from iterable and values set to value.
get(key[, default])
                   Return the value for key if key is in the dictionary, else default. If default is not given, it defaults to None, so that this method never raises a KeyError.
items()            Return a new view of the dictionary’s items ((key, value) pairs). See the documentation of view objects.
keys()             Return a new view of the dictionary’s keys. See the documentation of view objects.
pop(key[, default])
                   If key is in the dictionary, remove it and return its value, else return default. If default is not given and key is not in the dictionary, a KeyError is raised.
popitem()          Remove and return a (key, value) pair from the dictionary. Pairs are returned in LIFO order.
setdefault(key[, default])
                   If key is in the dictionary, return its value. If not, insert key with a value of default and return default. default defaults to None.
update([other])    Update the dictionary with the key/value pairs from other, overwriting existing keys. Return None.
values()           Return a new view of the dictionary’s values.
```
😱Yes, that's quite a big number of things. Again, the `collections` module has a class called `MutableMapping` which takes care of most things and only asks that we implement the `__getitem__`, `__setitem__`, `__delitem__`, `__iter__`, and `__len__` functions ExampleLets use `MutableMapping` to create a new type of dictionary, one that sends a message to observers whenever it changes
###Code
from collections.abc import MutableMapping
class Observable: # from ex 02
def __init__(self):
self.handlers = []
def register(self, callable_):
self.handlers.append(callable_)
def notify(self, event, *args, **kwargs):
for handler in self.handlers:
handler(event, *args, **kwargs)
class ObservableMapping(MutableMapping):
def __init__(self, dict_ = {}, dicttype = None):
dicttype = dicttype or type(dict_)
self.data = dicttype(dict_)
self.events = Observable()
def __setitem__(self, key, value):
self.events.notify('set', self, key, value)
return self.data.__setitem__(key, value)
def __delitem__(self, key):
self.events.notify('del', self, key)
return self.data.__delitem__(key)
def __getitem__(self, key):
return self.data.__getitem__(key)
def __iter__(self):
return self.data.__iter__()
def __len__(self):
return self.data.__len__()
def handler(event, obj, key, *args, **kwargs):
print(event, repr(key), '-->>', *args, **kwargs)
d = ObservableMapping()
d.events.register(handler)
d['name'] = 'Sir Launcelot of Camelot'
d['favorite color'] = 'blue'
d.popitem()
###Output
set 'name' -->> Sir Launcelot of Camelot
set 'favorite color' -->> blue
del 'name' -->>
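###Markdown
Because `MutableMapping` derives the rest of the dict API (`update`, `setdefault`, `pop`, ...) from the five methods we implemented, those mixin methods fire notifications too. A quick check, continuing the session above (the quest key/value is just an illustrative addition):
###Code
d.update(quest='To seek the Holy Grail')  # routed through our __setitem__
d.setdefault('name', 'Arthur')            # 'name' was removed by popitem(), so this sets it again
print(dict(d))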
|
board/RFSoC2x2/notebooks/common/rfsoc2x2_pmbus.ipynb | ###Markdown
PMBus on the RFSoC2x2

----

Aim/s
* Explore monitoring the power rails using PMBus through PYNQ.

Reference
* [PYNQ docs](https://pynq.readthedocs.io/en/latest/index.html)

Last revised
* 27Jan21
    * Initial revision

----

The board supports monitoring some of its power rails using PMBus. PYNQ exposes these rails through the `get_rails` function that returns a dictionary of all of the rails available to be monitored.
###Code
import pynq
rails = pynq.get_rails()
rails
###Output
_____no_output_____
###Markdown
As can be seen, the keys of the dictionary are the names of the voltage rails while the values are `Rail` objects which contain three sensors for the voltage, current and power. To see how power changes under CPU load we can use the `DataRecorder` class. For this example we are going to look at the `0V85` rail listed above as we load one of the CPU cores in Python.
###Code
recorder = pynq.DataRecorder(rails["0V85"].power)
###Output
_____no_output_____
###Markdown
We can now use the recorder to monitor the applied sensor. For this example we'll sample the power every half second while sleeping and performing a dummy loop.
###Code
import time
with recorder.record(0.5):
time.sleep(5)
for _ in range(10000000):
pass
time.sleep(5)
###Output
_____no_output_____
###Markdown
The `DataRecorder` exposes the sensor data as a pandas dataframe.
###Code
recorder.frame
###Output
_____no_output_____
###Markdown
or by plotting the results using matplotlib
###Code
%matplotlib inline
recorder.frame["0V85_power"].plot()
###Output
_____no_output_____
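###Markdown
Since `recorder.frame` is an ordinary pandas DataFrame, the usual analysis tools also apply. For example, summary statistics of the samples plotted above (a minimal sketch; the column name is taken from the plot call):
###Code
# Summarize the sampled power values recorded above
recorder.frame["0V85_power"].describe()[["mean", "min", "max"]]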
###Markdown
We can get more information by using the `mark` function which will increment the invocation number without having to stop and start the recorder.
###Code
recorder.reset()
with recorder.record(0.5):
time.sleep(5)
recorder.mark()
for _ in range(10000000):
pass
recorder.mark()
time.sleep(5)
recorder.frame.plot(subplots=True)
###Output
_____no_output_____ |
source/examples/basics/gog/geom_map.ipynb | ###Markdown
geom_map()
###Code
import pandas as pd
from lets_plot import *
from lets_plot.geo_data import *
LetsPlot.setup_html()
df = pd.read_csv('https://raw.githubusercontent.com/JetBrains/lets-plot-docs/master/data/midwest.csv')
states = geocode('state', df.state.unique(), scope='US').get_boundaries(9)
ggplot() + geom_map(data=states, tooltips=layer_tooltips().line('@{found name}')) + theme(panel_grid='blank')
###Output
_____no_output_____ |
jupyter_interactive_widgets/notebooks/reference_guides/.ipynb_checkpoints/guide-other-checkpoint.ipynb | ###Markdown
Layout and Styling of Jupyter widgets

This notebook presents how to layout and style Jupyter interactive widgets to build rich and *reactive* widget-based applications.

The `layout` attribute

Jupyter interactive widgets have a `layout` attribute exposing a number of CSS properties that impact how widgets are laid out.

Exposed CSS properties

The following properties map to the values of the CSS properties of the same name (underscores being replaced with dashes), applied to the top DOM elements of the corresponding widget.

Sizes
- `height`
- `width`
- `max_height`
- `max_width`
- `min_height`
- `min_width`

Display
- `visibility`
- `display`
- `overflow`
- `overflow_x` (deprecated in `7.5`, use `overflow` instead)
- `overflow_y` (deprecated in `7.5`, use `overflow` instead)

Box model
- `border`
- `margin`
- `padding`

Positioning
- `top`
- `left`
- `bottom`
- `right`

Flexbox
- `order`
- `flex_flow`
- `align_items`
- `flex`
- `align_self`
- `align_content`
- `justify_content`

Grid layout
- `grid_auto_columns`
- `grid_auto_flow`
- `grid_auto_rows`
- `grid_gap`
- `grid_template`
- `grid_row`
- `grid_column`

Shorthand CSS properties

You may have noticed that certain CSS properties such as `margin-[top/right/bottom/left]` seem to be missing. The same holds for `padding-[top/right/bottom/left]` etc.

In fact, you can atomically specify `[top/right/bottom/left]` margins via the `margin` attribute alone by passing the string `'100px 150px 100px 80px'` for respectively `top`, `right`, `bottom` and `left` margins of `100`, `150`, `100` and `80` pixels.

Similarly, the `flex` attribute can hold values for `flex-grow`, `flex-shrink` and `flex-basis`. The `border` attribute is a shorthand property for `border-width`, `border-style (required)`, and `border-color`.

Simple examples

The following example shows how to resize a `Button` so that its views have a height of `80px` and a width of `50%` of the available space. It also includes an example of setting a CSS property that requires multiple values (a border, in this case):
###Code
from ipywidgets import Button, Layout
b = Button(description='(50% width, 80px height) button',
layout=Layout(width='50%', height='80px', border='2px dotted blue'))
b
###Output
_____no_output_____
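###Markdown
The shorthand `margin` property described earlier takes the usual CSS top/right/bottom/left order. A minimal sketch using the example values from above:
###Code
from ipywidgets import Button, Layout
# top=100px, right=150px, bottom=100px, left=80px
Button(description='margin demo',
       layout=Layout(margin='100px 150px 100px 80px'))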
###Markdown
The `layout` property can be shared between multiple widgets and assigned directly.
###Code
Button(description='Another button with the same layout', layout=b.layout)
###Output
_____no_output_____
###Markdown
Description

You may have noticed that long descriptions are truncated. This is because the description length is, by default, fixed.
###Code
from ipywidgets import IntSlider
IntSlider(description='A too long description')
###Output
_____no_output_____
###Markdown
If you need more flexibility to lay out widgets and descriptions, you can use Label widgets directly.
###Code
from ipywidgets import HBox, Label
HBox([Label('A too long description'), IntSlider()])
###Output
_____no_output_____
###Markdown
You can change the length of the description to fit the description text. However, this will make the widget itself shorter. You can change both by adjusting the description width and the widget width using the widget's style.
###Code
style = {'description_width': 'initial'}
IntSlider(description='A too long description', style=style)
###Output
_____no_output_____
###Markdown
Natural sizes, and arrangements using HBox and VBox

Most of the core-widgets have default heights and widths that tile well together. This allows simple layouts based on the `HBox` and `VBox` helper functions to align naturally:
###Code
from ipywidgets import Button, HBox, VBox
words = ['correct', 'horse', 'battery', 'staple']
items = [Button(description=w) for w in words]
left_box = VBox([items[0], items[1]])
right_box = VBox([items[2], items[3]])
HBox([left_box, right_box])
###Output
_____no_output_____
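###Markdown
The grid layout properties listed earlier can be exercised with `GridBox`. A minimal sketch, assuming your ipywidgets version (7.3+) exposes `grid_template_columns`:
###Code
from ipywidgets import GridBox, Button, Layout
# three fixed-width columns with a small gap between cells
GridBox([Button(description=str(i)) for i in range(6)],
        layout=Layout(grid_template_columns='repeat(3, 120px)',
                      grid_gap='4px'))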
###Markdown
LaTeX

Widgets such as sliders and text inputs have a description attribute that can render LaTeX equations. The `Label` widget also renders LaTeX equations.
###Code
from ipywidgets import IntSlider, Label
IntSlider(description=r'\(\int_0^t f\)')
Label(value=r'\(e=mc^2\)')
###Output
_____no_output_____
###Markdown
Number formatting

Sliders have a readout field which can be formatted using Python's [Format Specification Mini-Language](https://docs.python.org/3/library/string.html#format-specification-mini-language). If the space available for the readout is too narrow for the string representation of the slider value, a different styling is applied to show that not all digits are visible.

**Four buttons in a VBox. Items stretch to the maximum width, in a vertical box taking `50%` of the available space.**
###Code
from ipywidgets import Layout, Button, Box
items_layout = Layout( width='auto') # override the default width of the button to 'auto' to let the button grow
box_layout = Layout(display='flex',
flex_flow='column',
align_items='stretch',
border='solid',
width='50%')
words = ['correct', 'horse', 'battery', 'staple']
items = [Button(description=word, layout=items_layout, button_style='danger') for word in words]
box = Box(children=items, layout=box_layout)
box
###Output
_____no_output_____
###Markdown
**Three buttons in an HBox. Items flex proportionally to their weight.**
###Code
from ipywidgets import Layout, Button, Box, VBox
# Items flex proportionally to the weight and the left over space around the text
items_auto = [
Button(description='weight=1; auto', layout=Layout(flex='1 1 auto', width='auto'), button_style='danger'),
Button(description='weight=3; auto', layout=Layout(flex='3 1 auto', width='auto'), button_style='danger'),
Button(description='weight=1; auto', layout=Layout(flex='1 1 auto', width='auto'), button_style='danger'),
]
# Items flex proportionally to the weight
items_0 = [
Button(description='weight=1; 0%', layout=Layout(flex='1 1 0%', width='auto'), button_style='danger'),
Button(description='weight=3; 0%', layout=Layout(flex='3 1 0%', width='auto'), button_style='danger'),
Button(description='weight=1; 0%', layout=Layout(flex='1 1 0%', width='auto'), button_style='danger'),
]
box_layout = Layout(display='flex',
flex_flow='row',
align_items='stretch',
width='70%')
box_auto = Box(children=items_auto, layout=box_layout)
box_0 = Box(children=items_0, layout=box_layout)
VBox([box_auto, box_0])
###Output
_____no_output_____
###Markdown
**A more advanced example: a reactive form.**

The form is a `VBox` of width `50%`. Each row in the VBox is an `HBox` that justifies the content with space between.
###Code
from ipywidgets import Layout, Button, Box, FloatText, Textarea, Dropdown, Label, IntSlider
form_item_layout = Layout(
display='flex',
flex_flow='row',
justify_content='space-between'
)
form_items = [
Box([Label(value='Age of the captain'), IntSlider(min=40, max=60)], layout=form_item_layout),
Box([Label(value='Egg style'),
Dropdown(options=['Scrambled', 'Sunny side up', 'Over easy'])], layout=form_item_layout),
Box([Label(value='Ship size'),
FloatText()], layout=form_item_layout),
Box([Label(value='Information'),
Textarea()], layout=form_item_layout)
]
form = Box(form_items, layout=Layout(
display='flex',
flex_flow='column',
border='solid 2px',
align_items='stretch',
width='50%'
))
form
###Output
_____no_output_____
###Markdown
**A more advanced example: a carousel.**
###Code
from ipywidgets import Layout, Button, Box, Label
item_layout = Layout(height='100px', min_width='40px')
items = [Button(layout=item_layout, description=str(i), button_style='warning') for i in range(40)]
box_layout = Layout(overflow_x='scroll',
border='3px solid black',
width='500px',
height='',
flex_flow='row',
display='flex')
carousel = Box(children=items, layout=box_layout)
VBox([Label('Scroll horizontally:'), carousel])
###Output
_____no_output_____
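###Markdown
The carousel above uses `overflow_x`, which newer ipywidgets versions deprecate (see the compatibility note below). A minimal sketch of the same box layout written with the shorthand property instead:
###Code
from ipywidgets import Layout
# 'scroll' applies to x, 'hidden' to y
box_layout_new = Layout(overflow='scroll hidden',
                        border='3px solid black',
                        width='500px',
                        flex_flow='row',
                        display='flex')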
###Markdown
*Compatibility note*

The `overflow_x` and `overflow_y` options are deprecated in ipywidgets `7.5`. Instead, use the shorthand property `overflow='scroll hidden'`. The first part specifies the overflow in `x`, the second the overflow in `y`.

A widget for exploring layout options

The widget below was written by ipywidgets user [Doug Redden (@DougRzz)](https://github.com/DougRzz). If you want to look through the source code to see how it works, take a look at this [notebook he contributed](cssJupyterWidgetStyling-UI.ipynb).

Use the dropdowns and sliders in the widget to change the layout of the box containing the five colored buttons. Many of the CSS layout options described above are available, and the Python code to generate a `Layout` object reflecting the settings is in a `TextArea` in the widget.
###Code
from layout_preview import layout
layout
###Output
_____no_output_____ |
4.1 Lab Advanced ML Bigquery.ipynb | ###Markdown
Advanced Model Training Workflow with TensorFlow 2.0

Project Setup
###Code
!pip install tensorflow-gpu==2.0.0-rc0
import numpy as np
import pandas as pd
import tensorflow as tf
import time
tf.__version__
from google.colab import auth
auth.authenticate_user()
print('Authenticated')
###Output
Authenticated
###Markdown
Staging Data
###Code
%%bigquery flights_df --project tensorflow-ml-course --verbose
SELECT
-- Departure delay
departure_delay,
-- Distance
distance,
-- Airlines
airline,
-- Airports
departure_airport,
arrival_airport,
-- Date information
CAST(EXTRACT(DAYOFWEEK FROM departure_date) AS STRING) as departure_weekday,
CAST(EXTRACT(MONTH FROM departure_date) AS STRING) as departure_month,
-- Target column
CASE WHEN (arrival_delay >= 15) THEN 1 ELSE 0 END as delayed
FROM (
-- Inner Query
SELECT
departure_delay,
ROUND(ST_DISTANCE(ST_GEOGPOINT(departure_lon, departure_lat), ST_GEOGPOINT(arrival_lon, arrival_lat))/1000) as distance,
airline,
arrival_airport,
departure_airport,
PARSE_DATE("%Y-%m-%d", date) AS departure_date,
arrival_delay
FROM
`bigquery-samples.airline_ontime_data.flights`
WHERE date >= '2009-01-01'
AND date <= '2009-12-31'
AND departure_delay > 0
)
%%bigquery high_traffic_airports --project tensorflow-ml-course --verbose
SELECT * FROM
(SELECT departure_airport as airport_code,
COUNT(*) as flights
FROM
`bigquery-samples.airline_ontime_data.flights`
WHERE date >= '2009-01-01'
AND date <= '2009-12-31'
GROUP BY departure_airport
ORDER BY airport_code)
WHERE flights > 10000
%%bigquery airline_codes --project tensorflow-ml-course --verbose
SELECT DISTINCT(airline)
FROM
`bigquery-samples.airline_ontime_data.flights`
WHERE date >= '2009-01-01'
AND date <= '2009-12-31'
ORDER BY airline
flights_df.shape
flights_df.sample(n = 5)
flights_df.dtypes
###Output
_____no_output_____
###Markdown
Data Preprocessing

Training-Testing-Split
###Code
train_df = flights_df.sample(frac=0.8,random_state=123)
test_df = flights_df.drop(train_df.index)
print(len(train_df), 'train examples')
print(len(test_df), 'test examples')
###Output
1841866 train examples
460466 test examples
###Markdown
Check Label distribution
###Code
print(round(flights_df.delayed.mean(),3)*100, '% delay in total dataset')
print(round(train_df.delayed.mean(),3)*100, '% delay in train dataset')
print(round(test_df.delayed.mean(),3)*100, '% delay in test dataset')
###Output
45.1 % delay in total dataset
45.1 % delay in train dataset
45.0 % delay in test dataset
###Markdown
Create input pipeline using tf.data

Build a tf.data.Dataset

Create a Batch Dataset from a Pandas Dataframe
###Code
def dataframe_to_dataset(dataframe, labels = 'delayed', shuffle=True, batch_size=32):
# Creates a tf.data dataset from a Pandas Dataframe
dataframe = dataframe.copy()
labels = dataframe.pop(labels)
dataset = tf.data.Dataset.from_tensor_slices((dict(dataframe), labels))
if shuffle:
dataset = dataset.shuffle(buffer_size=len(dataframe))
dataset = dataset.batch(batch_size)
return dataset
batch_size = 256
tf.keras.backend.set_floatx('float64')
train_ds = dataframe_to_dataset(train_df, batch_size=batch_size)
test_ds = dataframe_to_dataset(test_df, shuffle=False, batch_size=batch_size)
train_ds
###Output
_____no_output_____
###Markdown
The dataset returns a dictionary of column names (from the dataframe) that map to column values from rows in the dataframe.

Build Features using tf.feature_column

Demo for numeric variables:
###Code
example_batch = next(iter(train_ds))[0]
departure_delay = tf.feature_column.numeric_column("departure_delay")
feature_layer_demo = tf.keras.layers.DenseFeatures(departure_delay)
feature_layer_demo(example_batch).numpy()[:5]
###Output
_____no_output_____
###Markdown
Demo for bucketized variables:
###Code
departure_delay_bucketized = tf.feature_column.bucketized_column(departure_delay, boundaries = [2, 3, 6, 9, 13, 19, 28, 44, 76])
feature_layer_demo = tf.keras.layers.DenseFeatures(departure_delay_bucketized)
feature_layer_demo(example_batch).numpy()[:5]
###Output
_____no_output_____
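###Markdown
Categorical columns work the same way: a vocabulary list wrapped in an `indicator_column` yields a one-hot encoding. A minimal sketch (the weekday vocabulary here mirrors the one defined in the next cell):
###Code
weekday_demo = tf.feature_column.categorical_column_with_vocabulary_list(
    'departure_weekday', ['1', '2', '3', '4', '5', '6', '7'])
feature_layer_demo = tf.keras.layers.DenseFeatures(
    tf.feature_column.indicator_column(weekday_demo))
feature_layer_demo(example_batch).numpy()[:5]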
###Markdown
Setting Bins for numeric and vocabularies for categorical variables
###Code
departure_delay_bins = [2, 3, 6, 9, 13, 19, 28, 44, 76]
distance_bins = [600, 1200]
airports_voc = high_traffic_airports['airport_code']
airlines_voc = airline_codes['airline']
weekdays_voc = ['1', '2', '3', '4', '5', '6', '7']
months_voc = ['1', '2', '3', '4', '5', '6', '7', '8', '9', '10', '11', '12']
###Output
_____no_output_____
###Markdown
Build the input pipeline
###Code
feature_columns = []
# bucketized columns
distance = tf.feature_column.numeric_column("distance")
distance_buckets = tf.feature_column.bucketized_column(distance, boundaries = distance_bins)
feature_columns.append(distance_buckets)
departure_delay = tf.feature_column.numeric_column("departure_delay")
departure_delay_buckets = tf.feature_column.bucketized_column(departure_delay, boundaries = departure_delay_bins)
feature_columns.append(departure_delay_buckets)
# categorical columns
arrival_airports = tf.feature_column.categorical_column_with_vocabulary_list('arrival_airport', airports_voc)
arrival_airports_dummy = tf.feature_column.indicator_column(arrival_airports)
feature_columns.append(arrival_airports_dummy)
departure_airports = tf.feature_column.categorical_column_with_vocabulary_list('departure_airport', airports_voc)
departure_airports_dummy = tf.feature_column.indicator_column(departure_airports)
feature_columns.append(departure_airports_dummy)
airlines = tf.feature_column.categorical_column_with_vocabulary_list('airline', airlines_voc)
airlines_dummy = tf.feature_column.indicator_column(airlines)
feature_columns.append(airlines_dummy)
weekdays = tf.feature_column.categorical_column_with_vocabulary_list('departure_weekday', weekdays_voc)
weekdays_dummy = tf.feature_column.indicator_column(weekdays)
feature_columns.append(weekdays_dummy)
months = tf.feature_column.categorical_column_with_vocabulary_list('departure_month', months_voc)
months_dummy = tf.feature_column.indicator_column(months)
feature_columns.append(months_dummy)
feature_layer_demo = tf.keras.layers.DenseFeatures(feature_columns)
feature_layer_demo(example_batch).shape
feature_layer_demo(example_batch).numpy()[:1]
###Output
_____no_output_____
###Markdown
Defining our model

Define the feature layer
###Code
feature_layer = tf.keras.layers.DenseFeatures(feature_columns)
###Output
_____no_output_____
###Markdown
Build the model

Non-distributed model
###Code
model_normal = tf.keras.models.Sequential([
feature_layer,
tf.keras.layers.Dense(1, activation='sigmoid', kernel_regularizer=tf.keras.regularizers.l2(0.0001))
])
model_normal.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy']
)
###Output
_____no_output_____
###Markdown
Defining the Distribution Strategy

Mirrored Strategy

Multi-Workers Mirrored Strategy

Creating the Mirrored Strategy instance
###Code
distribute = tf.distribute.MirroredStrategy()
###Output
_____no_output_____
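###Markdown
A quick sanity check of how many replicas the strategy keeps in sync (one per visible GPU):
###Code
print("Replicas in sync:", distribute.num_replicas_in_sync)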
###Markdown
Distributed Training

Defining a distributed model
###Code
with distribute.scope():
model_distributed = tf.keras.models.Sequential([
feature_layer,
tf.keras.layers.Dense(1, activation='sigmoid', kernel_regularizer=tf.keras.regularizers.l2(0.0001))
])
model_distributed.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy']
)
###Output
_____no_output_____
###Markdown
Training the model: Normal vs. distributed training

Normal Training
###Code
start_time = time.time()
history = model_normal.fit(train_ds,
epochs = 5,
callbacks = [tf.keras.callbacks.TensorBoard("logs/normal_training")])
print("Normal training took: {}".format(time.time() - start_time))
###Output
W0904 19:15:35.735445 140223078004608 base_layer.py:1772] Layer dense is casting an input tensor from dtype float32 to the layer's dtype of float64, which is new behavior in TensorFlow 2. The layer has dtype float64 because it's dtype defaults to floatx.
If you intended to run this layer in float64, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.
To change all layers to have dtype float32 by default, call `tf.keras.backend.set_floatx('float32')`. To change just this layer, pass dtype='float32' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.
###Markdown
Distributed Training
###Code
start_time = time.time()
history = model_distributed.fit(train_ds,
epochs = 5,
callbacks = [tf.keras.callbacks.TensorBoard("logs/distributed_training")])
print("Distributed training took: {}".format(time.time() - start_time))
###Output
W0904 19:21:32.483937 140223078004608 base_layer.py:1772] Layer dense_1 is casting an input tensor from dtype float32 to the layer's dtype of float64, which is new behavior in TensorFlow 2. The layer has dtype float64 because it's dtype defaults to floatx.
If you intended to run this layer in float64, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.
To change all layers to have dtype float32 by default, call `tf.keras.backend.set_floatx('float32')`. To change just this layer, pass dtype='float32' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.
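###Markdown
Both runs wrote TensorBoard summaries under `logs/`. Assuming the TensorBoard notebook extension is available in this environment, the two training curves can be compared in place:
###Code
%load_ext tensorboard
%tensorboard --logdir logs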
|
notebooks/C_Generate_isoprenol_concentrations.ipynb | ###Markdown
Notebook C: Calculation of isoprenol concentrations for different strain designs

This notebook takes the initial designs previously suggested by ART and uses OMG to create the corresponding final isoprenol concentrations. These concentrations, along with the designs, will later be used by ART to make predictions and recommend new designs.

Tested using the **biodesign_3.7** kernel on the jprime.lbl.gov server. It requires the cplex library for running the MOMA optimization.

Inputs and outputs

Required files to run this notebook:
- A modified E. coli model with the isoprenol pathway added to it (`iJO1366_MVA.json` file in the `../data/models` directory)
- A set of designs (e.g. `../data/ice_mo_strains.csv` exported from **ICE**) containing the details of which reactions are either:
  - (0) eliminated
  - (1) included
  - (2) doubled the flux

Files generated by running this notebook:
- `EDD_experiment_description_file_BE_designs.csv`
- `EDD_isoprenol_production.csv`

The files are stored in the user defined directory.

Setup

Clone the git repository with the `OMG` library:

`git clone https://github.com/JBEI/OMG.git`

or pull the latest version.

Importing needed libraries:
###Code
import sys
import os
sys.path.insert(1, '../../OMG')
sys.path.append('../')
import cobra
import pandas as pd
import omg
from plot_multiomics import *
from tqdm import tqdm
###Output
_____no_output_____
###Markdown
User parameters
###Code
user_params = {
'host': 'ecoli', # ecoli or ropacus
'modelfile': '../data/models/iJO1366_MVA.json',
'cerevisiae_modelfile': '../data/models/iMM904.json',
'timestart': 0.0,
'timestop': 8.0,
'numtimepoints': 9,
'designsfile': 'ice_mo_strains.csv',
'designsfilepath': '../data/',
'mapping_file': '../mapping/inchikey_to_cid.txt',
'output_file_path': '../data/omg_output',
'edd_omics_file_path': '../data/omg_output/edd/',
'numreactions': 8,
'numinstances': 96,
'ext_metabolites': {
'glc__D_e': 22.203,
'nh4_e': 18.695,
'pi_e': 69.454,
'so4_e': 2.0,
'mg2_e': 2.0,
'k_e': 21.883,
'na1_e': 103.7,
'cl_e': 27.25,
'isoprenol_e': 0.0,
'ac_e': 0.0,
'for_e': 0.0,
'lac__D_e': 0.0,
'etoh_e': 0.0
},
'initial_OD': 0.01,
'BIOMASS_REACTION_ID': 'BIOMASS_Ec_iJO1366_core_53p95M'
}
###Output
_____no_output_____
###Markdown
Using the OMG library functions for creating synthetic multiomics data

1) Getting and preparing the metabolic model

First we obtain the metabolic model:
###Code
file_name = user_params['modelfile']
model = cobra.io.load_json_model(file_name)
###Output
_____no_output_____
###Markdown
We now add minimum flux constraints for production of isoprenol and formate, and we limit oxygen intake:
###Code
iso = 'EX_isoprenol_e'
iso_cons = model.problem.Constraint(model.reactions.EX_isoprenol_e.flux_expression,lb = 0.20)
model.add_cons_vars(iso_cons)
for_cons = model.problem.Constraint(model.reactions.EX_for_e.flux_expression,lb = 0.10)
model.add_cons_vars(for_cons)
o2_cons = model.problem.Constraint(model.reactions.EX_o2_e.flux_expression,lb = -8.0)
model.add_cons_vars(o2_cons)
###Output
_____no_output_____
###Markdown
And then we constrain several central carbon metabolism fluxes to more realistic upper and lower bounds:
###Code
CC_rxn_names = ['ACCOAC','MDH','PTAr','CS','ACACT1r','PPC','PPCK','PFL']
for reaction in CC_rxn_names:
reaction_constraint = model.problem.Constraint(model.reactions.get_by_id(reaction).flux_expression,lb = -1.0,ub = 1.0)
model.add_cons_vars(reaction_constraint)
###Output
_____no_output_____
###Markdown
We also create a similar model with a higher production of isoprenol, which we will use with MOMA to simulate bioengineered strains:
###Code
modelHI = model.copy()
iso_cons = modelHI.problem.Constraint(modelHI.reactions.EX_isoprenol_e.flux_expression,lb = 0.25)
modelHI.add_cons_vars(iso_cons)
###Output
_____no_output_____
###Markdown
2) Obtaining time series for the wild type

First create the time grid for simulation:
###Code
t0 = user_params['timestart']
tf = user_params['timestop']
points = user_params['numtimepoints']
tspan, delt = np.linspace(t0, tf, points, dtype='float64', retstep=True)
grid = (tspan, delt)
###Output
_____no_output_____
###Markdown
We then use this model to obtain the time series for fluxes, OD and external metabolites:
###Code
solution_TS, model_TS, cell, Emets, Erxn2Emet = \
omg.get_flux_time_series(model, user_params['ext_metabolites'], grid, user_params)
###Output
0.0 optimal 0.5363612610171437
1.0 optimal 0.5363612610171437
2.0 optimal 0.5363612610171437
3.0 optimal 0.5363612610171437
4.0 optimal 0.5363612610171437
5.0 optimal 0.5363612610171437
6.0 optimal 0.5363612610171437
7.0 optimal 0.5363612610171437
8.0 optimal 0.5363612610171437
###Markdown
We perform the same calculation for the model with higher isoprenol production that we created above:
###Code
solutionHI_TS, modelHI_TS, cellHI, EmetsHI, Erxn2EmetHI = \
omg.get_flux_time_series(modelHI, user_params['ext_metabolites'], grid, user_params)
###Output
0.0 optimal 0.5352266385352652
1.0 optimal 0.5352266385352652
2.0 optimal 0.5352266385352652
3.0 optimal 0.5352266385352652
4.0 optimal 0.5352266385352652
5.0 optimal 0.5352266385352652
6.0 optimal 0.5352266385352652
7.0 optimal 0.5352266385352652
8.0 optimal 0.5352266385352652
###Markdown
3) Getting bioengineered flux profiles through MOMA

First obtain the file from ICE with the suggested designs (i.e. reaction knockouts and overexpressions):
###Code
designs_df = pd.read_csv(f'{user_params["designsfilepath"]}/{user_params["designsfile"]}',
usecols=['Part ID', 'Name', 'Summary'])
designs_df.columns = ['Part ID','Line Name','Line Description']
designs_df2 = designs_df.copy() # make copy for creating EDD experiment description file later
designs_df[:2]
###Output
_____no_output_____
###Markdown
Storing information from the ICE line description into a dataframe

In order to work with the ICE line descriptions, we need to convert them from their string format into numerical values in a dataframe (see below). First, let's add columns for each reaction:
###Code
reactions = designs_df['Line Description'][0].split('_')[::2]
for rxn in reactions:
designs_df[rxn] = None
###Output
_____no_output_____
###Markdown
And then assign values for each reaction and line
###Code
for i in range(len(designs_df)):
    if designs_df['Line Name'][i] == 'WT':
        # the wild type keeps every flux unchanged (multiplier 1)
        designs_df.loc[i, reactions] = [1 for r in range(len(reactions))]
    else:
        values = designs_df.loc[i]['Line Description'].split('_')[1::2]
        # use .loc[row, cols] rather than chained indexing, which can silently assign to a copy
        designs_df.loc[i, reactions] = [float(value) for value in values]
designs_df = designs_df.drop(columns=['Line Description','Part ID'])
###Output
_____no_output_____
###Markdown
The final dataframe contains the line name and the numerical multipliers that we will use to simulate the bioengineered strains. Each design (line) involves the modification of up to 8 fluxes (1 -> keep the same; 2 -> double the flux; 0 -> knock the reaction out):
###Code
designs_df.tail()
###Output
_____no_output_____
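###Markdown
As a quick illustration of the decoding performed above, a single raw description string (still available in `designs_df2`) maps to reaction multipliers like so:
###Code
# Decode one 'Line Description' string by hand
desc = designs_df2['Line Description'][0]   # e.g. 'ACCOAC_1_MDH_1_PTAr_2_...'
tokens = desc.split('_')
print(dict(zip(tokens[::2], map(float, tokens[1::2]))))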
###Markdown
Creating time series of flux profiles for each bioengineered strain

We then use MOMA to calculate flux profiles at each time point for the bioengineered strains, as indicated by the designs in this data frame (takes around 1 min per design). Instead of using the solution time series corresponding to the initial model, we use the solution time series corresponding to the higher production. The reason is that, otherwise, we would never see an increase in isoprenol production, since MOMA minimizes the changes in flux by design. Our goal here is just to create varied flux profiles that ART can learn from. This is a **long calculation** (~1.5 hrs):
###Code
%%time
solutionsMOMA_TS = {}
cols = ['Line Name']
cols.extend(reactions)
if user_params['numinstances'] not in [None, 0]:
num_strains = user_params['numinstances']
else:
num_strains = designs_df.shape[0]
for i in tqdm(range(num_strains)): # Added counter bar here
design = designs_df[cols].loc[i]
if design['Line Name']=='WT':
solutionsMOMA_TS[i] = omg.getBEFluxes(model_TS, design, solution_TS, grid)
else:
solutionsMOMA_TS[i] = omg.getBEFluxes(model_TS, design, solutionHI_TS, grid)
###Output
100%|██████████| 96/96 [1:30:58<00:00, 56.86s/it]
###Markdown
As a sanity check, we can verify that the knocked out fluxes are zero:
###Code
i = 0
print(designs_df.loc[i,:], '\n')
for rxn in ['CS','PPC','PPCK','PFL']:
print(f'{rxn}: {solutionsMOMA_TS[i][5].fluxes[rxn]}')
###Output
Line Name Strain 1
ACCOAC 1
MDH 1
PTAr 2
CS 0
ACACT1r 2
PPC 0
PPCK 0
PFL 0
Name: 0, dtype: object
CS: 0.0
PPC: 0.0
PPCK: 0.0
PFL: 0.0
###Markdown
4) Producing the external metabolite concentrations for each bioengineered strain

Here we use the `integrate_fluxes` function in OMG to produce the external metabolite concentrations which are the consequence of the calculated fluxes:
###Code
cellsEmetsBE = {}
for i in range(num_strains):
cell, Emets = omg.integrate_fluxes(solutionsMOMA_TS[i], model_TS, user_params['ext_metabolites'], grid, user_params)
cellsEmetsBE[i] = (cell, Emets)
###Output
_____no_output_____
###Markdown
We can check that we obtain the same result with this function for the wild type as we did before in notebook A:
###Code
cellWT, EmetsWT = omg.integrate_fluxes(solution_TS, model_TS, user_params['ext_metabolites'], grid, user_params)
EmetsWT
plot_DO_extmets(cellWT, EmetsWT[['glc__D_e','isoprenol_e','ac_e','for_e','lac__D_e','etoh_e']])
###Output
_____no_output_____
###Markdown
And compare these growth and production profiles with any other bioengineered strain:
###Code
i = 2
cellBE, EmetsBE = cellsEmetsBE[i]
plot_DO_extmets(cellBE, EmetsBE[['glc__D_e','isoprenol_e','ac_e','for_e','lac__D_e','etoh_e']])
EmetsBE
###Output
_____no_output_____
###Markdown
5) Creating a file with isoprenol concentrations for EDD import and training ART

Firstly, let's collect all isoprenol production values in a single list:
###Code
production = []
for i in range(user_params['numinstances']):
cell, Emets = cellsEmetsBE[i]
production.append(Emets.loc[user_params['numtimepoints'],'isoprenol_e'])
###Output
_____no_output_____
###Markdown
Then, let's create a new data frame and append the production values for each strain/line:
###Code
production_df = designs_df.copy()
production_df['Isoprenol'] = pd.Series(production)
production_df.loc[0:2,:]
###Output
_____no_output_____
###Markdown
The maximum production is higher than for the original (WT) strain (0.462):
###Code
np.max(production_df['Isoprenol'])
###Output
_____no_output_____
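###Markdown
To see which designs drive that maximum, the production frame can be sorted (a quick sketch):
###Code
production_df.sort_values('Isoprenol', ascending=False).head(3)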
###Markdown
Reformat for export as an EDD input file

Remove the columns that are not needed:
###Code
production_edd_df = production_df.drop(columns=reactions).copy()
###Output
_____no_output_____
###Markdown
Rename isoprenol column:
###Code
isoprenol_cid = 'CID:12988'
production_edd_df = production_edd_df.rename(columns={'Isoprenol': isoprenol_cid})
###Output
_____no_output_____
###Markdown
Pivot the dataframe for EDD format:
###Code
production_edd_df = production_edd_df.set_index('Line Name').stack().reset_index()
production_edd_df.columns = ['Line Name', 'Measurement Type', 'Value']
###Output
_____no_output_____
###Markdown
Add Time and Units columns:
###Code
production_edd_df['Time'] = 9.0
production_edd_df['Units'] = 'mM'
production_edd_df.head()
###Output
_____no_output_____
###Markdown
Save the dataframe as csv:
###Code
production_file_name = f'{user_params["edd_omics_file_path"]}/EDD_isoprenol_production.csv'
production_edd_df.to_csv(production_file_name, index=False)
###Output
_____no_output_____
###Markdown
Create experiment description file for EDD

We then create the `EDD_experiment_description_file_BE_designs.csv` file for the import of data into EDD:
###Code
experiment_description_file_name = f'{user_params["edd_omics_file_path"]}/EDD_experiment_description_file_BE_designs.csv'
with open(experiment_description_file_name, 'w') as fh:
fh.write('Part ID, Line Name, Line Description, Media, Shaking Speed, Starting OD, Culture Volume, Flask Volume, Growth Temperature, Replicate Count\n')
    for i in range(len(designs_df2)):
        row = designs_df2.loc[i]
        # build each row as one string: the original backslash continuations
        # leaked the indentation whitespace into the CSV fields
        fh.write(f"{row['Part ID']}, {row['Line Name']}, {row['Line Description']}, "
                 f"M9, 1, 0.1, 50, 200, 30, 1\n")
###Output
_____no_output_____ |
3DFeatures_without_average.ipynb | ###Markdown
Function wrappers
###Code
# NOTE: assumed imports -- this notebook also relies on pyAudioAnalysis-style
# helper modules bound to aF/aT in an earlier session (module paths depend on the local fork)
import numpy as np
import tensorflow as tf
def extract_no_avg_3Dfeatures(path, mid_window=0.15, mid_step=0.15, short_window=0.05, short_step=0.025, steps=100):
    # pass mid_window and mid_step separately (the original passed mid_step twice)
    features, class_names, file_names = aF.multiple_directory_3Dfeature_extraction_no_avg(path, mid_window, mid_step, short_window, short_step, steps)
    feature_matrix, labels = aT.features_to_matrix(features)
    return feature_matrix, labels, class_names
def threeD_data_store(data, output_name):
    # The 2nd and 3rd dimensions are folded into one before saving.
    reshaped = data.reshape(data.shape[0], -1)
    np.savetxt(output_name, reshaped, delimiter=",")
def threeD_data_load(input_name, third_dimension_size):
    # load with the same delimiter used when saving (the original omitted it),
    # then unfold the 2nd dimension back into two (the unused 'output' parameter was dropped)
    loaded_data = np.loadtxt(input_name, delimiter=",")
    restored_data = loaded_data.reshape(loaded_data.shape[0], loaded_data.shape[1] // third_dimension_size, third_dimension_size)
    return restored_data
training_path = ['/home/test/Speech/Wang/dataset/Trainingsets/angry',
'/home/test/Speech/Wang/dataset/Trainingsets/happy'
# '/home/test/Speech/Wang/dataset/Trainingsets/sad'
]
testing_path = ['/home/test/Speech/Wang/dataset/testsets/angry',
'/home/test/Speech/Wang/dataset/testsets/happy'
# '/home/test/Speech/Wang/dataset/testsets/sad'
]
###Output
_____no_output_____
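###Markdown
A quick round-trip check of the store/load helpers on a small random tensor (uses the functions defined above):
###Code
demo = np.random.rand(4, 5, 6).astype(np.float32)
threeD_data_store(demo, 'demo.csv')
restored = threeD_data_load('demo.csv', third_dimension_size=6)
print(np.allclose(demo, restored))  # True if the fold/unfold round-trips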
###Markdown
Extract the features
###Code
X_train,y_train,class_names=extract_no_avg_3Dfeatures(training_path)
X_test,y_test,class_names=extract_no_avg_3Dfeatures(testing_path)
y_train = tf.keras.utils.to_categorical(y_train, len(training_path),dtype='float32')
y_test = tf.keras.utils.to_categorical(y_test, len(testing_path),dtype='float32')
print('The dimension of training set',X_train.shape)
print('The dimension of testing set',X_test.shape)
print('The dimension of y_train',y_train.shape)
###Output
The dimension of training set (652, 100, 136)
The dimension of testing set (91, 100, 136)
The dimension of y_train (652, 2)
###Markdown
Store datasets
###Code
threeD_data_store(X_train,'X_training.csv')
np.savetxt('y_training.csv',y_train,delimiter=",")
threeD_data_store(X_test,'X_testing.csv')
np.savetxt('y_testing.csv',y_test,delimiter=",")
###Output
_____no_output_____ |
notebooks/embedding_plugin.ipynb | ###Markdown
HugeCTR Embedding Plugin for TensorFlow

This notebook introduces a TensorFlow (TF) plugin for the HugeCTR embedding layer, embedding_plugin, where users may benefit from both the computational efficiency of the HugeCTR embedding layer and the ease of use of TensorFlow (TF).

What is new
- Support for `Localized` embedding.
- No need to split the DNN model into two sub-models, which means the embedding layer can be put inside the scope of MirroredStrategy.

Check Docker Container

Please make sure that you have started the notebook inside the running NGC docker container: `nvcr.io/nvstaging/merlin/merlin-tensorflow-training:0.5`. Several dynamic libraries have been installed to the system path `/usr/local/hugectr/lib/` that you'll have to load using TensorFlow. For convenient usage, you can directly import `hugectr_tf_ops_v2.py`, where we prepare the code to load that dynamic library and wrap some operations, in your python script to be used with the embedding_plugin.

Verify Accuracy

To verify whether the embedding_plugin can obtain correct results, you can generate synthetic data for testing purposes as shown below.
###Code
# run this cell to clear all variables.
%reset -f
# import tensorflow and some modules
import tensorflow as tf
# do not let TF allocate all GPU memory
devices = tf.config.list_physical_devices("GPU")
for dev in devices:
tf.config.experimental.set_memory_growth(dev, True)
import numpy as np
# import hugectr_tf_ops.py to use embedding_plugin ops
import sys
sys.path.append("../tools/embedding_plugin/python/")
import hugectr_tf_ops_v2
# generate a random embedding table and show
vocabulary_size = 8
slot_num = 3
embedding_vector_size = 4
table = np.float32([i for i in range(1, vocabulary_size * embedding_vector_size + 1)]).reshape(vocabulary_size, embedding_vector_size)
print("init embedding table value:\n", table)
###Output
init embedding table value:
[[ 1. 2. 3. 4.]
[ 5. 6. 7. 8.]
[ 9. 10. 11. 12.]
[13. 14. 15. 16.]
[17. 18. 19. 20.]
[21. 22. 23. 24.]
[25. 26. 27. 28.]
[29. 30. 31. 32.]]
###Markdown
In HugeCTR, the corresponding dense shape of the input keys is `[batch_size, slot_num, max_nnz]`, and `0` is a valid key. Therefore, `-1` is used to denote invalid keys, which only occupy that position in the corresponding dense keys matrix.
###Code
# generate random keys to lookup from embedding table.
keys = np.array([[[0, -1], # nnz = 1
[1, -1], # nnz = 1
[2, 6]], # nnz = 2
[[0, -1], # nnz = 1
[1, -1], # nnz = 1
[-1, -1]], # nnz = 0
[[0, -1], # nnz = 1
[1, -1], # nnz = 1
[6, -1]], # nnz = 1
[[0, -1], # nnz = 1
[1, -1], # nnz = 1
[2, -1]]], # nnz = 1
dtype=np.int64)
print("the dense shape of inputs keys:", keys.shape)
# define a simple forward propagation and backward propagation with embedding_plugin
# NOTE: because hugectr_tf_ops_v2.init() can only be called once,
# if you want to run this cell multiple times, please restart the kernel,
# or explicitly release embedding_plugin resources by calling hugectr_tf_ops_v2.reset()
# try release embedding plugin resources.
hugectr_tf_ops_v2.reset()
# hugectr_tf_ops embedding_plugin initialize
hugectr_tf_ops_v2.init(visible_gpus=[0], seed=0, key_type='int64', value_type='float', batch_size=4, batch_size_eval=4)
# create a distributed embedding_layer with embedding_plugin
dis_embedding_name = hugectr_tf_ops_v2.create_embedding(init_value=table, opt_hparams=[0.1, 0.9, 0.99, 1e-3],
name_='embedding_verification',
max_vocabulary_size_per_gpu=vocabulary_size,
slot_num=slot_num, embedding_vec_size=embedding_vector_size,
embedding_type='distributed', max_nnz=2)
# create a localized embedding_layer with embedding_plugin
loc_embedding_name = hugectr_tf_ops_v2.create_embedding(init_value=table, opt_hparams=[0.1, 0.9, 0.99, 1e-3],
name_='embedding_verification',
max_vocabulary_size_per_gpu=vocabulary_size,
slot_num=slot_num, embedding_vec_size=embedding_vector_size,
embedding_type='localized', max_nnz=2, update_type='Global')
# convert dense input keys to COO format
reshape_keys = tf.reshape(keys, [-1, keys.shape[-1]])
indices = tf.where(reshape_keys != -1)
values = tf.gather_nd(reshape_keys, indices)
row_indices = tf.transpose(indices, perm=[1, 0])
# create a Variable used for backward propagation
bp_trigger = tf.Variable(initial_value=1.0, trainable=True, dtype=tf.float32)
with tf.GradientTape(persistent=True) as tape:
tape.watch(bp_trigger)
# get distributed embedding forward result
dis_each_replicas = hugectr_tf_ops_v2.broadcast_then_convert_to_csr(dis_embedding_name, row_indices, values,
T = [tf.int32] * 1)
dis_forward_result = hugectr_tf_ops_v2.fprop(dis_embedding_name, 0, dis_each_replicas, bp_trigger, is_training=True)
print("Distributed Embedding first forward_result:\n", dis_forward_result, '\n')
# get localized embedding forward result
loc_each_replicas = hugectr_tf_ops_v2.broadcast_then_convert_to_csr(loc_embedding_name, row_indices, values,
T = [tf.int32] * 1)
loc_forward_result = hugectr_tf_ops_v2.fprop(loc_embedding_name, 0, loc_each_replicas, bp_trigger, is_training=True)
print("Localized Embedding first forward_result:\n", loc_forward_result, '\n')
# compute gradients & update params
dis_grads = tape.gradient(dis_forward_result, bp_trigger)
loc_grads = tape.gradient(loc_forward_result, bp_trigger)
# do second forward propagation to check whether embedding table is updated.
dis_forward_result_2 = hugectr_tf_ops_v2.fprop(dis_embedding_name, 0, dis_each_replicas, bp_trigger, is_training=True)
loc_forward_result_2 = hugectr_tf_ops_v2.fprop(loc_embedding_name, 0, loc_each_replicas, bp_trigger, is_training=True)
print("-"*100)
print("Distributed Embedding second forward_result:\n", dis_forward_result_2, '\n')
print("Localized Embedding second forward_result:\n", loc_forward_result_2, '\n')
# explicitly release embedding plugin resources
hugectr_tf_ops_v2.reset()
# similarly, use original tensorflow op to compare whether results are consistent.
# define a tf embedding layer
class EmbeddingLayer(tf.keras.layers.Layer):
def __init__(self, vocabulary_size, embedding_vec_size,
init_value):
super(EmbeddingLayer, self).__init__()
self.vocabulary_size = vocabulary_size
self.embedding_vec_size = embedding_vec_size
self.init_value = init_value
def build(self, _):
self.Var = self.add_weight(shape=(self.vocabulary_size, self.embedding_vec_size),
initializer=tf.constant_initializer(value=self.init_value))
def call(self, inputs):
return tf.nn.embedding_lookup_sparse(self.Var, inputs, sp_weights=None, combiner="sum")
with tf.GradientTape() as tape:
# reshape keys into [batch_size * slot_num, max_nnz]
reshape_keys = np.reshape(keys, newshape=(-1, keys.shape[-1]))
indices = tf.where(reshape_keys != -1)
values = tf.gather_nd(reshape_keys, indices)
# define a layer
tf_layer = EmbeddingLayer(vocabulary_size, embedding_vector_size, table)
# wrap input keys components into a SparseTensor
sparse_tensor = tf.sparse.SparseTensor(indices, values, reshape_keys.shape)
tf_forward = tf_layer(sparse_tensor)
print("tf forward_result:\n", tf.reshape(tf_forward, [keys.shape[0], keys.shape[1], tf_forward.shape[-1]]))
# define an optimizer
optimizer = tf.keras.optimizers.Adam(learning_rate=0.1, beta_1=0.9, beta_2=0.99, epsilon=1e-3)
# compute gradients & update params
grads = tape.gradient(tf_forward, tf_layer.trainable_weights)
optimizer.apply_gradients(zip(grads, tf_layer.trainable_weights))
# do second forward propagation to check whether params are updated.
tf_forward_2 = tf_layer(sparse_tensor)
print("\n")
print("tf second forward_result:\n", tf.reshape(tf_forward_2, [keys.shape[0], keys.shape[1], tf_forward_2.shape[-1]]))
# assert whether embedding_plugin's results are consistent with tensorflow original ops
# verify first forward results consistency
dis_first_forward_consistent = np.allclose(dis_forward_result.numpy(),
tf.reshape(tf_forward, [keys.shape[0], keys.shape[1], tf_forward.shape[-1]]).numpy())
loc_first_forward_consistent = np.allclose(loc_forward_result.numpy(),
tf.reshape(tf_forward, [keys.shape[0], keys.shape[1], tf_forward.shape[-1]]).numpy())
print("Consistent in first forward propagation for both Distributed & Localized Embedding?",
(dis_first_forward_consistent and loc_first_forward_consistent))
# verify second forward results consistency
dis_second_forward_consistent = np.allclose(dis_forward_result_2.numpy(),
tf.reshape(tf_forward_2, [keys.shape[0], keys.shape[1], tf_forward_2.shape[-1]]))
loc_second_forward_consistent = np.allclose(loc_forward_result_2.numpy(),
tf.reshape(tf_forward_2, [keys.shape[0], keys.shape[1], tf_forward_2.shape[-1]]))
print("Consistent in second forward propagation for both Distributed & Localized Embedding?",
(dis_second_forward_consistent and loc_second_forward_consistent))
###Output
Consistent in first forward propagation for both Distributed & Localized Embedding? True
Consistent in second forward propagation for both Distributed & Localized Embedding? True
###Markdown
The results from the embedding_plugin and the original TF ops are consistent in both the first and second forward propagation for both `Distributed Embedding` and `Localized Embedding`, which means the embedding_plugin gets the same forward result and performs the same backward propagation as the TF ops. Therefore, the embedding_plugin obtains correct results.

DeepFM demo

In this notebook, TF 2.x is used to build the DeepFM model.

Define Models with the Embedding_Plugin
###Code
# first, import tensorflow and import plugin ops from hugectr_tf_ops_v2.py
import tensorflow as tf
# do not let TF allocate all GPU memory
devices = tf.config.list_physical_devices("GPU")
for dev in devices:
tf.config.experimental.set_memory_growth(dev, True)
import sys
sys.path.append("../tools/embedding_plugin/python/")
import hugectr_tf_ops_v2
# define TF layers
class Multiply(tf.keras.layers.Layer):
def __init__(self, out_units):
super(Multiply, self).__init__()
self.out_units = out_units
def build(self, input_shape):
self.w = self.add_weight(name='weight_vector', shape=(input_shape[1], self.out_units),
initializer='glorot_uniform', trainable=True)
def call(self, inputs):
return inputs * self.w
# build DeepFM with plugin ops
class DeepFM_PluginEmbedding(tf.keras.models.Model):
def __init__(self,
vocabulary_size,
embedding_vec_size,
dropout_rate, # list of float
deep_layers, # list of int
initializer,
gpus,
batch_size,
batch_size_eval,
embedding_type = 'localized',
slot_num=1,
seed=123):
super(DeepFM_PluginEmbedding, self).__init__()
tf.keras.backend.clear_session()
tf.compat.v1.set_random_seed(seed)
self.vocabulary_size = vocabulary_size
self.embedding_vec_size = embedding_vec_size
self.dropout_rate = dropout_rate
self.deep_layers = deep_layers
self.gpus = gpus
self.batch_size = batch_size
self.batch_size_eval = batch_size_eval
self.slot_num = slot_num
self.embedding_type = embedding_type
if isinstance(initializer, str):
initializer = False
# when building model with embedding_plugin ops, init() should be called prior to any other ops.
hugectr_tf_ops_v2.init(visible_gpus=gpus, seed=seed, key_type='int64', value_type='float',
batch_size=batch_size, batch_size_eval=batch_size_eval)
# create a embedding_plugin layer
self.embedding_name = hugectr_tf_ops_v2.create_embedding(init_value=initializer, name_='hugectr_embedding',
embedding_type=embedding_type, optimizer_type='Adam',
max_vocabulary_size_per_gpu=(self.vocabulary_size // len(self.gpus)) + 1,
opt_hparams=[0.1, 0.9, 0.99, 1e-5], update_type='Local',
atomic_update=True, scaler=1.0, slot_num=self.slot_num,
max_nnz=1, max_feature_num=1*self.slot_num,
embedding_vec_size=self.embedding_vec_size + 1, combiner='sum')
# other layers with TF original ops
self.deep_dense = []
for i, deep_units in enumerate(self.deep_layers):
self.deep_dense.append(tf.keras.layers.Dense(units=deep_units, activation=None, use_bias=True,
kernel_initializer='glorot_normal',
bias_initializer='glorot_normal'))
self.deep_dense.append(tf.keras.layers.Dropout(dropout_rate[i]))
self.deep_dense.append(tf.keras.layers.Dense(units=1, activation=None, use_bias=True,
kernel_initializer='glorot_normal',
bias_initializer=tf.constant_initializer(0.01)))
self.add_layer = tf.keras.layers.Add()
self.y_act = tf.keras.layers.Activation(activation='sigmoid')
self.dense_multi = Multiply(1)
self.dense_embedding = Multiply(self.embedding_vec_size)
self.concat_1 = tf.keras.layers.Concatenate()
self.concat_2 = tf.keras.layers.Concatenate()
def build(self, _):
self.bp_trigger = self.add_weight(name='bp_trigger', shape=(1,), dtype=tf.float32, trainable=True)
@tf.function
def call(self, dense_feature, each_replica, training=True):
"""
forward propagation.
#arguments:
dense_feature: [batch_size, dense_dim]
"""
with tf.name_scope("embedding_and_slice"):
dense_0 = tf.cast(tf.expand_dims(dense_feature, 2), dtype=tf.float32) # [batchsize, dense_dim, 1]
dense_mul = self.dense_multi(dense_0) # [batchsize, dense_dim, 1]
dense_emb = self.dense_embedding(dense_0) # [batchsize, dense_dim, embedding_vec_size]
dense_mul = tf.reshape(dense_mul, [dense_mul.shape[0], -1]) # [batchsize, dense_dim * 1]
dense_emb = tf.reshape(dense_emb, [dense_emb.shape[0], -1]) # [batchsize, dense_dim * embedding_vec_size]
sparse = hugectr_tf_ops_v2.fprop(self.embedding_name, 0, #replica_ctx.replica_id_in_sync_group,
each_replica, self.bp_trigger, is_training=training) # [batch_size, self.slot_num, self.embedding_vec_size + 1]
sparse_1 = tf.slice(sparse, [0, 0, self.embedding_vec_size], [-1, self.slot_num, 1]) #[batchsize, slot_num, 1]
sparse_1 = tf.squeeze(sparse_1, 2) # [batchsize, slot_num]
sparse_emb = tf.slice(sparse, [0, 0, 0], [-1, self.slot_num, self.embedding_vec_size]) #[batchsize, slot_num, embedding_vec_size]
sparse_emb = tf.reshape(sparse_emb, [-1, self.slot_num * self.embedding_vec_size]) #[batchsize, slot_num * embedding_vec_size]
with tf.name_scope("FM"):
with tf.name_scope("first_order"):
first = self.concat_1([dense_mul, sparse_1]) # [batchsize, dense_dim + slot_num]
first_out = tf.reduce_sum(first, axis=-1, keepdims=True) # [batchsize, 1]
with tf.name_scope("second_order"):
hidden = self.concat_2([dense_emb, sparse_emb]) # [batchsize, (dense_dim + slot_num) * embedding_vec_size]
second = tf.reshape(hidden, [-1, dense_feature.shape[1] + self.slot_num, self.embedding_vec_size])
square_sum = tf.math.square(tf.math.reduce_sum(second, axis=1, keepdims=True)) # [batchsize, 1, embedding_vec_size]
sum_square = tf.math.reduce_sum(tf.math.square(second), axis=1, keepdims=True) # [batchsize, 1, embedding_vec_size]
second_out = 0.5 * (sum_square - square_sum) # [batchsize, 1, embedding_vec_size]
second_out = tf.math.reduce_sum(second_out, axis=-1, keepdims=False) # [batchsize, 1]
with tf.name_scope("Deep"):
for i, layer in enumerate(self.deep_dense):
if i % 2 == 0: # dense
hidden = layer(hidden)
else: # dropout
hidden = layer(hidden, training)
y = self.add_layer([hidden, first_out, second_out])
y = self.y_act(y) # [batchsize, 1]
return y
@property
def get_embedding_name(self):
return self.embedding_name
###Output
_____no_output_____
###Markdown
The above cells use embedding plugin ops and TF layers to define a TF DeepFM model. Similarly, define an embedding layer with TF original ops, and define a DeepFM model with that layer. Because the embedding_plugin supports model parallelism, the parameters of the original TF embedding layer are equally distributed to each GPU for a fair performance comparison.

Define Models with the Original TF Ops
###Code
# define a TF embedding layer with TF original ops
class OriginalEmbedding(tf.keras.layers.Layer):
def __init__(self,
vocabulary_size,
embedding_vec_size,
initializer='uniform',
combiner="sum",
gpus=[0]):
super(OriginalEmbedding, self).__init__()
self.vocabulary_size = vocabulary_size
self.embedding_vec_size = embedding_vec_size
if isinstance(initializer, str):
self.initializer = tf.keras.initializers.get(initializer)
else:
self.initializer = initializer
if combiner not in ["sum", "mean"]:
raise RuntimeError("combiner must be one of \{'sum', 'mean'\}.")
self.combiner = combiner
if (not isinstance(gpus, list)) and (not isinstance(gpus, tuple)):
raise RuntimeError("gpus must be a list or tuple.")
self.gpus = gpus
def build(self, _):
if isinstance(self.initializer, tf.keras.initializers.Initializer):
if len(self.gpus) > 1:
self.embeddings_params = list()
mod_size = self.vocabulary_size % len(self.gpus)
vocabulary_size_each_gpu = [(self.vocabulary_size // len(self.gpus)) + (1 if dev_id < mod_size else 0)
for dev_id in range(len(self.gpus))]
for i, gpu in enumerate(self.gpus):
with tf.device("/gpu:%d" %gpu):
params_i = self.add_weight(name="embedding_" + str(gpu),
shape=(vocabulary_size_each_gpu[i], self.embedding_vec_size),
initializer=self.initializer)
self.embeddings_params.append(params_i)
else:
self.embeddings_params = self.add_weight(name='embeddings',
shape=(self.vocabulary_size, self.embedding_vec_size),
initializer=self.initializer)
else:
self.embeddings_params = self.initializer
@tf.function
def call(self, keys, output_shape):
result = tf.nn.embedding_lookup_sparse(self.embeddings_params, keys,
sp_weights=None, combiner=self.combiner)
return tf.reshape(result, output_shape)
# define DeepFM model with original TF embedding layer
class DeepFM_OriginalEmbedding(tf.keras.models.Model):
def __init__(self,
vocabulary_size,
embedding_vec_size,
dropout_rate, # list of float
deep_layers, # list of int
initializer,
gpus,
batch_size,
batch_size_eval,
embedding_type = 'localized',
slot_num=1,
seed=123):
super(DeepFM_OriginalEmbedding, self).__init__()
tf.keras.backend.clear_session()
tf.compat.v1.set_random_seed(seed)
self.vocabulary_size = vocabulary_size
self.embedding_vec_size = embedding_vec_size
self.dropout_rate = dropout_rate
self.deep_layers = deep_layers
self.gpus = gpus
self.batch_size = batch_size
self.batch_size_eval = batch_size_eval
self.slot_num = slot_num
self.embedding_type = embedding_type
self.original_embedding_layer = OriginalEmbedding(vocabulary_size=vocabulary_size,
embedding_vec_size=embedding_vec_size + 1,
initializer=initializer, gpus=gpus)
self.deep_dense = []
for i, deep_units in enumerate(self.deep_layers):
self.deep_dense.append(tf.keras.layers.Dense(units=deep_units, activation=None, use_bias=True,
kernel_initializer='glorot_normal',
bias_initializer='glorot_normal'))
self.deep_dense.append(tf.keras.layers.Dropout(dropout_rate[i]))
self.deep_dense.append(tf.keras.layers.Dense(units=1, activation=None, use_bias=True,
kernel_initializer='glorot_normal',
bias_initializer=tf.constant_initializer(0.01)))
self.add_layer = tf.keras.layers.Add()
self.y_act = tf.keras.layers.Activation(activation='sigmoid')
self.dense_multi = Multiply(1)
self.dense_embedding = Multiply(self.embedding_vec_size)
self.concat_1 = tf.keras.layers.Concatenate()
self.concat_2 = tf.keras.layers.Concatenate()
@tf.function
def call(self, dense_feature, sparse_feature, training=True):
"""
forward propagation.
#arguments:
dense_feature: [batch_size, dense_dim]
sparse_feature: for OriginalEmbedding, it is a SparseTensor, and the dense shape is [batch_size * slot_num, max_nnz];
for PluginEmbedding, it is a list of [row_offsets, value_tensors, nnz_array].
"""
with tf.name_scope("embedding_and_slice"):
dense_0 = tf.cast(tf.expand_dims(dense_feature, 2), dtype=tf.float32) # [batchsize, dense_dim, 1]
dense_mul = self.dense_multi(dense_0) # [batchsize, dense_dim, 1]
dense_emb = self.dense_embedding(dense_0) # [batchsize, dense_dim, embedding_vec_size]
dense_mul = tf.reshape(dense_mul, [dense_mul.shape[0], -1]) # [batchsize, dense_dim * 1]
dense_emb = tf.reshape(dense_emb, [dense_emb.shape[0], -1]) # [batchsize, dense_dim * embedding_vec_size]
sparse = self.original_embedding_layer(sparse_feature, output_shape=[-1, self.slot_num, self.embedding_vec_size + 1])
sparse_1 = tf.slice(sparse, [0, 0, self.embedding_vec_size], [-1, self.slot_num, 1]) #[batchsize, slot_num, 1]
sparse_1 = tf.squeeze(sparse_1, 2) # [batchsize, slot_num]
sparse_emb = tf.slice(sparse, [0, 0, 0], [-1, self.slot_num, self.embedding_vec_size]) #[batchsize, slot_num, embedding_vec_size]
sparse_emb = tf.reshape(sparse_emb, [-1, self.slot_num * self.embedding_vec_size]) #[batchsize, slot_num * embedding_vec_size]
with tf.name_scope("FM"):
with tf.name_scope("first_order"):
first = self.concat_1([dense_mul, sparse_1]) # [batchsize, dense_dim + slot_num]
first_out = tf.reduce_sum(first, axis=-1, keepdims=True) # [batchsize, 1]
with tf.name_scope("second_order"):
hidden = self.concat_2([dense_emb, sparse_emb]) # [batchsize, (dense_dim + slot_num) * embedding_vec_size]
second = tf.reshape(hidden, [-1, dense_feature.shape[1] + self.slot_num, self.embedding_vec_size])
square_sum = tf.math.square(tf.math.reduce_sum(second, axis=1, keepdims=True)) # [batchsize, 1, embedding_vec_size]
sum_square = tf.math.reduce_sum(tf.math.square(second), axis=1, keepdims=True) # [batchsize, 1, embedding_vec_size]
second_out = 0.5 * (sum_square - square_sum) # [batchsize, 1, embedding_vec_size]
second_out = tf.math.reduce_sum(second_out, axis=-1, keepdims=False) # [batchsize, 1]
with tf.name_scope("Deep"):
for i, layer in enumerate(self.deep_dense):
if i % 2 == 0: # dense
hidden = layer(hidden)
else: # dropout
hidden = layer(hidden, training)
y = self.add_layer([hidden, first_out, second_out])
y = self.y_act(y) # [batchsize, 1]
return y
###Output
_____no_output_____
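###Markdown
For reference, instantiating the plugin-based model could look like the sketch below; every hyperparameter is a placeholder rather than a tuned value (26 slots correspond to the 26 categorical Criteo features described next):
###Code
# Hypothetical instantiation sketch -- adjust the numbers to your data
model = DeepFM_PluginEmbedding(vocabulary_size=1737710, embedding_vec_size=10,
                               dropout_rate=[0.5, 0.5], deep_layers=[400, 400],
                               initializer='uniform', gpus=[0],
                               batch_size=16384, batch_size_eval=16384,
                               embedding_type='distributed', slot_num=26)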
###Markdown
Dataset is needed to use these models for training. [Kaggle Criteo datasets](http://labs.criteo.com/2014/02/kaggle-display-advertising-challenge-dataset/) provided by CriteoLabs is used as the training dataset. The original training set contains 45,840,617 examples. Each example contains a label (0 by default or 1 if the ad was clicked) and 39 features in which 13 of them are integer and the other 26 are categorial. Since TFRecord is suitable for the training process and the Criteo dataset is missing numerous values across the feature columns, preprocessing is needed. The original test set won't be used because it doesn't contain labels. Dataset processing 1. Download dataset from [https://ailab.criteo.com/download-criteo-1tb-click-logs-dataset/](http://azuremlsampleexperiments.blob.core.windows.net/criteo/day_1.gz).2. Extract the dataset by running the following command. ```shell $ gunzip day_1.gz ``` 3. The whole dataset is too large, so get a subset with ```shell $ head -n 45840617 day_1 > train.txt ```4. Preprocess the datast and set missing values.Preprocessing functions are defined in [preprocess.py](../tools/embedding_plugin/performance_profile/preprocess.py). Open that file and check the codes.
###Code
# specify source csv name and output csv name, run this command will do the preprocessing.
# Warning: this command will take serveral hours to do preprocessing.
%run ../tools/embedding_plugin/performance_profile/preprocess.py \
--src_csv_path=../tools/embedding_plugin/train.txt \
--dst_csv_path=../tools/embedding_plugin/train.out.txt \
--normalize_dense=0 --feature_cross=0
###Output
_____no_output_____
###Markdown
5. Split the dataset by running the following commands:```shell$ head -n 36672493 train.out.txt > train$ tail -n 9168124 train.out.txt > valtest$ head -n 4584062 valtest > val$ tail -n 4584062 valtest > test``` 6. Convert the dataset to a TFRecord file. Converting functions are defined in [txt2tfrecord.py](../tools/embedding_plugin/performance_profile/txt2tfrecord.py). Open that file and check the codes.After the data preprocessing is completed, *.tfrecord file(s) will be generated, which can be used for training. The training loop can now be configured to use the dataset and models to perform the training.
###Code
# specify source name and output tfrecord name, run this command will do the converting.
# Warning: this command will take half an hour to do converting.
%run ../tools/embedding_plugin/performance_profile/txt2tfrecord.py \
--src_txt_name=train \
--dst_tfrecord_name=train.tfrecord \
--normalized=0 --use_multi_process=1 \
--shard_num=1
# if multi tfrecord files are wanted, set shard_num to the number of files.
###Output
_____no_output_____
###Markdown
Define training loop and do training In [read_data.py](../tools/embedding_plugin/performance_profile/read_data.py), some preprocessing and TF data reading pipeline creation functions are defined.
###Code
# set env path, so that some modules can be imported
sys.path.append("../tools/embedding_plugin/performance_profile/")
import txt2tfrecord as utils
from read_data import CreateDataset
import time
import logging
logging.basicConfig(format='%(asctime)s %(message)s')
logging.root.setLevel('INFO')
# choose wich model for training
which_model = "Plugin" # change it to "Original", if you want to try the model define with original tf ops.
# set some hyper parameters for training process
if ("Plugin" == which_model):
batch_size = 16384
n_epochs = 1
distribute_keys = 1
gpus = [0] # use GPU0
embedding_type = 'distributed'
vocabulary_size = 1737710
embedding_vec_size = 10
slot_num = 26
batch_size_eval = 1 * len(gpus)
elif ("Original" == which_model):
batch_size = 16384
n_epochs = 1
distribute_keys = 0
gpus = [0] # use GPU0
vocabulary_size = 1737710
embedding_vec_size = 10
slot_num = 26
batch_size_eval = 1 * len(gpus)
embedding_type = 'distributed'
# define feature_description to read tfrecord examples.
cols = [utils.idx2key(idx, False) for idx in range(0, utils.NUM_TOTAL_COLUMNS)]
feature_desc = dict()
for col in cols:
if col == 'label' or col.startswith("I"):
feature_desc[col] = tf.io.FixedLenFeature([], tf.int64) # scaler
else:
feature_desc[col] = tf.io.FixedLenFeature([1], tf.int64) # [slot_num, nnz]
# please set data_path to your tfrecord
data_path = "../tools/embedding_plugin/performance_profile/"
# create tfrecord reading pipeling
dataset_names = [data_path + "./train.tfrecord"]
dataset = CreateDataset(dataset_names=dataset_names,
feature_desc=feature_desc,
batch_size=batch_size,
n_epochs=n_epochs,
slot_num=slot_num,
max_nnz=1,
convert_to_csr=False,
gpu_count=len(gpus),
embedding_type=embedding_type,
get_row_indices=True)()
# define loss function and optimizer used in other TF layers.
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
loss_fn = tf.keras.losses.BinaryCrossentropy(from_logits=False)
# create model instance
if "Original" == which_model:
model = DeepFM_OriginalEmbedding(vocabulary_size=vocabulary_size, embedding_vec_size=embedding_vec_size,
embedding_type=embedding_type,
dropout_rate=[0.5] * 10, deep_layers=[1024] * 10,
initializer='uniform', gpus=gpus, batch_size=batch_size,
batch_size_eval=batch_size_eval,
slot_num=slot_num)
elif "Plugin" == which_model:
hugectr_tf_ops_v2.reset()
model = DeepFM_PluginEmbedding(vocabulary_size=vocabulary_size, embedding_vec_size=embedding_vec_size,
embedding_type=embedding_type,
dropout_rate=[0.5] * 10, deep_layers=[1024] * 10,
initializer='uniform', gpus=gpus, batch_size=batch_size,
batch_size_eval=batch_size_eval,
slot_num=slot_num)
# define training step
@tf.function
def _train_step(dense_batch, sparse_batch, y_batch, model, loss_fn, optimizer):
with tf.GradientTape() as tape:
y_batch = tf.cast(y_batch, dtype=tf.float32)
logits = model(dense_batch, sparse_batch, training=True)
loss = loss_fn(y_batch, logits)
loss /= dense_batch.shape[0]
grads = tape.gradient(loss, model.trainable_weights)
optimizer.apply_gradients(zip(grads, model.trainable_weights))
return loss
# training loop
logging.info("begin to train")
begin_time = time.time()
display_begin = begin_time
for step, datas in enumerate(dataset):
label, dense, others = datas[0], datas[1], datas[2:]
if "Original" == which_model:
sparse = others[-1]
elif "Plugin" == which_model:
sparse = others[0:2]
sparse = hugectr_tf_ops_v2.broadcast_then_convert_to_csr(model.get_embedding_name,
row_indices=sparse[0], values=sparse[1],
T=[tf.int32]*len(gpus))
train_loss = _train_step(dense, sparse, label, model, loss_fn, optimizer)
loss_value = train_loss.numpy()
if (step % 100 == 0 and step != 0):
display_end = time.time()
logging.info("step: %d, loss: %.7f, elapsed time: %.5f seconds." %(step, loss_value, (display_end - display_begin)))
display_begin = display_end
end_time = time.time()
logging.info("Train End. Elapsed Time: %.3f seconds." %(end_time - begin_time))
###Output
2021-01-30 07:52:25,346 begin to train
2021-01-30 07:52:37,596 step: 100, loss: 0.0000278, elapsed time: 12.24864 seconds.
2021-01-30 07:52:48,122 step: 200, loss: 0.0000301, elapsed time: 10.52632 seconds.
2021-01-30 07:52:59,111 step: 300, loss: 0.0000292, elapsed time: 10.98891 seconds.
2021-01-30 07:53:10,397 step: 400, loss: 0.0000298, elapsed time: 11.28664 seconds.
2021-01-30 07:53:21,045 step: 500, loss: 0.0000308, elapsed time: 10.64784 seconds.
2021-01-30 07:53:31,526 step: 600, loss: 0.0000298, elapsed time: 10.48030 seconds.
2021-01-30 07:53:41,712 step: 700, loss: 0.0000298, elapsed time: 10.18635 seconds.
2021-01-30 07:53:52,467 step: 800, loss: 0.0000304, elapsed time: 10.75503 seconds.
2021-01-30 07:54:03,011 step: 900, loss: 0.0000299, elapsed time: 10.54400 seconds.
2021-01-30 07:54:14,301 step: 1000, loss: 0.0000307, elapsed time: 11.28991 seconds.
2021-01-30 07:54:25,194 step: 1100, loss: 0.0000286, elapsed time: 10.89364 seconds.
2021-01-30 07:54:35,751 step: 1200, loss: 0.0000310, elapsed time: 10.55683 seconds.
2021-01-30 07:54:46,374 step: 1300, loss: 0.0000318, elapsed time: 10.62237 seconds.
2021-01-30 07:54:56,874 step: 1400, loss: 0.0000299, elapsed time: 10.50061 seconds.
2021-01-30 07:55:07,540 step: 1500, loss: 0.0000308, elapsed time: 10.66558 seconds.
2021-01-30 07:55:18,125 step: 1600, loss: 0.0000317, elapsed time: 10.58490 seconds.
2021-01-30 07:55:28,575 step: 1700, loss: 0.0000298, elapsed time: 10.45029 seconds.
2021-01-30 07:55:39,030 step: 1800, loss: 0.0000329, elapsed time: 10.45500 seconds.
2021-01-30 07:55:49,386 step: 1900, loss: 0.0000326, elapsed time: 10.35561 seconds.
2021-01-30 07:55:59,627 step: 2000, loss: 0.0000334, elapsed time: 10.24157 seconds.
2021-01-30 07:56:09,869 step: 2100, loss: 0.0000335, elapsed time: 10.24189 seconds.
2021-01-30 07:56:20,113 step: 2200, loss: 0.0000348, elapsed time: 10.24432 seconds.
2021-01-30 07:56:24,112 Train End. Elapsed Time: 238.765 seconds.
###Markdown
In this configuration, `tf.data.Dataset` produces training data slowly, which makes the whole training process slow. Therefore, the training elapsed time for `Original` and `Plugin` are similar. API signature All embedding_plugin APIs are defined in [hugectr_tf_ops_v2.py](../tools/embedding_plugin/python/hugectr_tf_ops_v2.py).Embedding_plugin takes `COO (Coordinate)` format as input format when `fprop` is used. In some cases, `fprop_experimental` can get better performance than `fprop`, but it is not stable. If `fprop_experimental` is used, input data format should be `CSR (Compressed Sparse Row)`. For more detail about how to convert your input data to `CSR` or `COO` format, please refer to [samples/format_processing.py](../tools/embedding_plugin/samples/format_processing.py). For more code samples, please refer to [samples/sample_with_fprop*.py](../tools/embedding_plugin/samples/sample_with_fprop.py).
###Code
%%html
<style>
table {float:left}
</style>
###Output
_____no_output_____
###Markdown
HugeCTR Embedding Plugin for TensorFlow

This notebook introduces the embedding_plugin, a TensorFlow (TF) plugin for the HugeCTR embedding layer that lets users benefit from both the computational efficiency of the HugeCTR embedding layer and the ease of use of TF.

What is new
- Support for `Localized` embedding.
- No need to split the DNN model into two sub-models, which means the embedding layer can be put inside the scope of MirroredStrategy.

Check Docker Container
Please make sure that you have started the notebook inside the running NGC docker container: `nvcr.io/nvidia/hugectr:v3.0-plugin-embedding`. Several dynamic libraries have been installed to the system path `/usr/local/hugectr/lib/` and must be loaded through TensorFlow. For convenience, you can directly import `hugectr_tf_ops_v2.py`, which loads that dynamic library and wraps its operations, in any Python script that uses the embedding_plugin.

Verify Accuracy
To verify that the embedding_plugin produces correct results, you can generate synthetic data for testing purposes as shown below.
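For reference, wrappers like `hugectr_tf_ops_v2.py` typically load such a shared library through `tf.load_op_library` and re-export the resulting ops as Python functions. The sketch below only illustrates that pattern; the exact library filename under `/usr/local/hugectr/lib/` is an assumption here, so use the shipped wrapper rather than this snippet.

```python
import tensorflow as tf

# Hypothetical filename for illustration; list /usr/local/hugectr/lib/ inside
# the container to find the real shared object.
_plugin_lib = tf.load_op_library("/usr/local/hugectr/lib/libembedding_plugin.so")

# The custom ops become attributes of the returned module, which is how a
# wrapper script can expose them as ordinary Python functions.
```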
###Code
# run this cell to clear all variables.
%reset -f
# import tensorflow and some modules
import tensorflow as tf
# do not let TF allocate all GPU memory
devices = tf.config.list_physical_devices("GPU")
for dev in devices:
tf.config.experimental.set_memory_growth(dev, True)
import numpy as np
# import hugectr_tf_ops_v2.py to use embedding_plugin ops
import sys
sys.path.append("../tools/embedding_plugin/python/")
import hugectr_tf_ops_v2
# generate a random embedding table and show
vocabulary_size = 8
slot_num = 3
embedding_vector_size = 4
table = np.float32([i for i in range(1, vocabulary_size * embedding_vector_size + 1)]).reshape(vocabulary_size, embedding_vector_size)
print("init embedding table value:\n", table)
###Output
init embedding table value:
[[ 1. 2. 3. 4.]
[ 5. 6. 7. 8.]
[ 9. 10. 11. 12.]
[13. 14. 15. 16.]
[17. 18. 19. 20.]
[21. 22. 23. 24.]
[25. 26. 27. 28.]
[29. 30. 31. 32.]]
###Markdown
In HugeCTR, the dense shape of the input keys is `[batch_size, slot_num, max_nnz]`, and `0` is a valid key. Therefore, `-1` is used to denote invalid keys, which merely pad their positions in the dense keys matrix.
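As a quick aside, the per-slot nnz values annotated in the next cell can be recovered by counting the entries that are not the `-1` padding value; a minimal self-contained sketch:

```python
import numpy as np

# One sample with 3 slots and max_nnz = 2; -1 marks padding, as in the next cell.
keys = np.array([[[0, -1], [1, -1], [2, 6]]], dtype=np.int64)
nnz = np.sum(keys != -1, axis=-1)
print(nnz)  # [[1 1 2]]
```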
###Code
# generate random keys to lookup from embedding table.
keys = np.array([[[0, -1], # nnz = 1
[1, -1], # nnz = 1
[2, 6]], # nnz = 2
[[0, -1], # nnz = 1
[1, -1], # nnz = 1
[-1, -1]], # nnz = 0
[[0, -1], # nnz = 1
[1, -1], # nnz = 1
[6, -1]], # nnz = 1
[[0, -1], # nnz = 1
[1, -1], # nnz = 1
[2, -1]]], # nnz = 1
dtype=np.int64)
print("the dense shape of inputs keys:", keys.shape)
# define a simple forward propagation and backward propagation with embedding_plugin
# NOTE: because hugectr_tf_ops_v2.init() can only be called once,
# if you want to run this cell multi-times, please restart the kernel,
# or explicitly release embedding_plugin resources by calling hugectr_tf_ops_v2.reset()
# try release embedding plugin resources.
hugectr_tf_ops_v2.reset()
# hugectr_tf_ops embedding_plugin initialize
hugectr_tf_ops_v2.init(visible_gpus=[0], seed=0, key_type='int64', value_type='float', batch_size=4, batch_size_eval=4)
# create a distributed embedding_layer with embedding_plugin
dis_embedding_name = hugectr_tf_ops_v2.create_embedding(init_value=table, opt_hparams=[0.1, 0.9, 0.99, 1e-3],
name_='embedding_verification',
max_vocabulary_size_per_gpu=vocabulary_size,
slot_num=slot_num, embedding_vec_size=embedding_vector_size,
embedding_type='distributed', max_nnz=2)
# create a localized embedding_layer with embedding_plugin
loc_embedding_name = hugectr_tf_ops_v2.create_embedding(init_value=table, opt_hparams=[0.1, 0.9, 0.99, 1e-3],
name_='embedding_verification',
max_vocabulary_size_per_gpu=vocabulary_size,
slot_num=slot_num, embedding_vec_size=embedding_vector_size,
embedding_type='localized', max_nnz=2, update_type='Global')
# convert dense input keys to COO format
reshape_keys = tf.reshape(keys, [-1, keys.shape[-1]])
indices = tf.where(reshape_keys != -1)
values = tf.gather_nd(reshape_keys, indices)
row_indices = tf.transpose(indices, perm=[1, 0])
# create a Variable used for backward propagation
bp_trigger = tf.Variable(initial_value=1.0, trainable=True, dtype=tf.float32)
with tf.GradientTape(persistent=True) as tape:
tape.watch(bp_trigger)
# get distributed embedding forward result
dis_each_replicas = hugectr_tf_ops_v2.broadcast_then_convert_to_csr(dis_embedding_name, row_indices, values,
T = [tf.int32] * 1)
dis_forward_result = hugectr_tf_ops_v2.fprop(dis_embedding_name, 0, dis_each_replicas, bp_trigger, is_training=True)
print("Distributed Embedding first forward_result:\n", dis_forward_result, '\n')
# get localized embedding forward result
loc_each_replicas = hugectr_tf_ops_v2.broadcast_then_convert_to_csr(loc_embedding_name, row_indices, values,
T = [tf.int32] * 1)
loc_forward_result = hugectr_tf_ops_v2.fprop(loc_embedding_name, 0, loc_each_replicas, bp_trigger, is_training=True)
print("Localized Embedding first forward_result:\n", loc_forward_result, '\n')
# compute gradients & update params
dis_grads = tape.gradient(dis_forward_result, bp_trigger)
loc_grads = tape.gradient(loc_forward_result, bp_trigger)
# do second forward propagation to check whether embedding table is updated.
dis_forward_result_2 = hugectr_tf_ops_v2.fprop(dis_embedding_name, 0, dis_each_replicas, bp_trigger, is_training=True)
loc_forward_result_2 = hugectr_tf_ops_v2.fprop(loc_embedding_name, 0, loc_each_replicas, bp_trigger, is_training=True)
print("-"*100)
print("Distributed Embedding second forward_result:\n", dis_forward_result_2, '\n')
print("Localized Embedding second forward_result:\n", loc_forward_result_2, '\n')
# explicitly release embedding plugin resources
hugectr_tf_ops_v2.reset()
# similarly, use original tensorflow op to compare whether results are consistent.
# define a tf embedding layer
class EmbeddingLayer(tf.keras.layers.Layer):
def __init__(self, vocabulary_size, embedding_vec_size,
init_value):
super(EmbeddingLayer, self).__init__()
self.vocabulary_size = vocabulary_size
self.embedding_vec_size = embedding_vec_size
self.init_value = init_value
def build(self, _):
self.Var = self.add_weight(shape=(self.vocabulary_size, self.embedding_vec_size),
initializer=tf.constant_initializer(value=self.init_value))
def call(self, inputs):
return tf.nn.embedding_lookup_sparse(self.Var, inputs, sp_weights=None, combiner="sum")
with tf.GradientTape() as tape:
# reshape keys into [batch_size * slot_num, max_nnz]
reshape_keys = np.reshape(keys, newshape=(-1, keys.shape[-1]))
indices = tf.where(reshape_keys != -1)
values = tf.gather_nd(reshape_keys, indices)
# define a layer
tf_layer = EmbeddingLayer(vocabulary_size, embedding_vector_size, table)
# wrap input keys components into a SparseTensor
sparse_tensor = tf.sparse.SparseTensor(indices, values, reshape_keys.shape)
tf_forward = tf_layer(sparse_tensor)
print("tf forward_result:\n", tf.reshape(tf_forward, [keys.shape[0], keys.shape[1], tf_forward.shape[-1]]))
# define an optimizer
optimizer = tf.keras.optimizers.Adam(learning_rate=0.1, beta_1=0.9, beta_2=0.99, epsilon=1e-3)
# compute gradients & update params
grads = tape.gradient(tf_forward, tf_layer.trainable_weights)
optimizer.apply_gradients(zip(grads, tf_layer.trainable_weights))
# do second forward propagation to check whether params are updated.
tf_forward_2 = tf_layer(sparse_tensor)
print("\n")
print("tf second forward_result:\n", tf.reshape(tf_forward_2, [keys.shape[0], keys.shape[1], tf_forward_2.shape[-1]]))
# assert whether embedding_plugin's results are consistent with tensorflow original ops
# verify first forward results consistency
dis_first_forward_consistent = np.allclose(dis_forward_result.numpy(),
tf.reshape(tf_forward, [keys.shape[0], keys.shape[1], tf_forward.shape[-1]]).numpy())
loc_first_forward_consistent = np.allclose(loc_forward_result.numpy(),
tf.reshape(tf_forward, [keys.shape[0], keys.shape[1], tf_forward.shape[-1]]).numpy())
print("Consistent in first forward propagation for both Distributed & Localized Embedding?",
(dis_first_forward_consistent and loc_first_forward_consistent))
# verify second forward results consistency
dis_second_forward_consistent = np.allclose(dis_forward_result_2.numpy(),
tf.reshape(tf_forward_2, [keys.shape[0], keys.shape[1], tf_forward_2.shape[-1]]))
loc_second_forward_consistent = np.allclose(loc_forward_result_2.numpy(),
tf.reshape(tf_forward_2, [keys.shape[0], keys.shape[1], tf_forward_2.shape[-1]]))
print("Consistent in second forward propagation for both Distributed & Localized Embedding?",
(dis_second_forward_consistent and loc_second_forward_consistent))
###Output
Consistent in first forward propagation for both Distributed & Localized Embedding? True
Consistent in second forward propagation for both Distributed & Localized Embedding? True
###Markdown
The results from the embedding_plugin and the original TF ops are consistent in both the first and second forward propagation for both `Distributed Embedding` and `Localized Embedding`, which means the embedding_plugin produces the same forward results and performs the same backward propagation as the TF ops. Therefore, the embedding_plugin produces correct results.

DeepFM demo
In this notebook, TF 2.x is used to build the DeepFM model.

Define Models with the Embedding_Plugin
###Code
# first, import tensorflow and import plugin ops from hugectr_tf_ops_v2.py
import tensorflow as tf
# do not let TF allocate all GPU memory
devices = tf.config.list_physical_devices("GPU")
for dev in devices:
tf.config.experimental.set_memory_growth(dev, True)
import sys
sys.path.append("../tools/embedding_plugin/python/")
import hugectr_tf_ops_v2
# define TF layers
class Multiply(tf.keras.layers.Layer):
def __init__(self, out_units):
super(Multiply, self).__init__()
self.out_units = out_units
def build(self, input_shape):
self.w = self.add_weight(name='weight_vector', shape=(input_shape[1], self.out_units),
initializer='glorot_uniform', trainable=True)
def call(self, inputs):
return inputs * self.w
# build DeepFM with plugin ops
class DeepFM_PluginEmbedding(tf.keras.models.Model):
def __init__(self,
vocabulary_size,
embedding_vec_size,
dropout_rate, # list of float
deep_layers, # list of int
initializer,
gpus,
batch_size,
batch_size_eval,
embedding_type = 'localized',
slot_num=1,
seed=123):
super(DeepFM_PluginEmbedding, self).__init__()
tf.keras.backend.clear_session()
tf.compat.v1.set_random_seed(seed)
self.vocabulary_size = vocabulary_size
self.embedding_vec_size = embedding_vec_size
self.dropout_rate = dropout_rate
self.deep_layers = deep_layers
self.gpus = gpus
self.batch_size = batch_size
self.batch_size_eval = batch_size_eval
self.slot_num = slot_num
self.embedding_type = embedding_type
if isinstance(initializer, str):
initializer = False
# when building model with embedding_plugin ops, init() should be called prior to any other ops.
hugectr_tf_ops_v2.init(visible_gpus=gpus, seed=seed, key_type='int64', value_type='float',
batch_size=batch_size, batch_size_eval=batch_size_eval)
# create a embedding_plugin layer
self.embedding_name = hugectr_tf_ops_v2.create_embedding(init_value=initializer, name_='hugectr_embedding',
embedding_type=embedding_type, optimizer_type='Adam',
max_vocabulary_size_per_gpu=(self.vocabulary_size // len(self.gpus)) + 1,
opt_hparams=[0.1, 0.9, 0.99, 1e-5], update_type='Local',
atomic_update=True, scaler=1.0, slot_num=self.slot_num,
max_nnz=1, max_feature_num=1*self.slot_num,
embedding_vec_size=self.embedding_vec_size + 1, combiner='sum')
# other layers with TF original ops
self.deep_dense = []
for i, deep_units in enumerate(self.deep_layers):
self.deep_dense.append(tf.keras.layers.Dense(units=deep_units, activation=None, use_bias=True,
kernel_initializer='glorot_normal',
bias_initializer='glorot_normal'))
self.deep_dense.append(tf.keras.layers.Dropout(dropout_rate[i]))
self.deep_dense.append(tf.keras.layers.Dense(units=1, activation=None, use_bias=True,
kernel_initializer='glorot_normal',
bias_initializer=tf.constant_initializer(0.01)))
self.add_layer = tf.keras.layers.Add()
self.y_act = tf.keras.layers.Activation(activation='sigmoid')
self.dense_multi = Multiply(1)
self.dense_embedding = Multiply(self.embedding_vec_size)
self.concat_1 = tf.keras.layers.Concatenate()
self.concat_2 = tf.keras.layers.Concatenate()
def build(self, _):
self.bp_trigger = self.add_weight(name='bp_trigger', shape=(1,), dtype=tf.float32, trainable=True)
@tf.function
def call(self, dense_feature, each_replica, training=True):
"""
forward propagation.
#arguments:
dense_feature: [batch_size, dense_dim]
"""
with tf.name_scope("embedding_and_slice"):
dense_0 = tf.cast(tf.expand_dims(dense_feature, 2), dtype=tf.float32) # [batchsize, dense_dim, 1]
dense_mul = self.dense_multi(dense_0) # [batchsize, dense_dim, 1]
dense_emb = self.dense_embedding(dense_0) # [batchsize, dense_dim, embedding_vec_size]
dense_mul = tf.reshape(dense_mul, [dense_mul.shape[0], -1]) # [batchsize, dense_dim * 1]
dense_emb = tf.reshape(dense_emb, [dense_emb.shape[0], -1]) # [batchsize, dense_dim * embedding_vec_size]
sparse = hugectr_tf_ops_v2.fprop(self.embedding_name, 0, #replica_ctx.replica_id_in_sync_group,
each_replica, self.bp_trigger, is_training=training) # [batch_size, self.slot_num, self.embedding_vec_size + 1]
sparse_1 = tf.slice(sparse, [0, 0, self.embedding_vec_size], [-1, self.slot_num, 1]) #[batchsize, slot_num, 1]
sparse_1 = tf.squeeze(sparse_1, 2) # [batchsize, slot_num]
sparse_emb = tf.slice(sparse, [0, 0, 0], [-1, self.slot_num, self.embedding_vec_size]) #[batchsize, slot_num, embedding_vec_size]
sparse_emb = tf.reshape(sparse_emb, [-1, self.slot_num * self.embedding_vec_size]) #[batchsize, slot_num * embedding_vec_size]
with tf.name_scope("FM"):
with tf.name_scope("first_order"):
first = self.concat_1([dense_mul, sparse_1]) # [batchsize, dense_dim + slot_num]
first_out = tf.reduce_sum(first, axis=-1, keepdims=True) # [batchsize, 1]
with tf.name_scope("second_order"):
hidden = self.concat_2([dense_emb, sparse_emb]) # [batchsize, (dense_dim + slot_num) * embedding_vec_size]
second = tf.reshape(hidden, [-1, dense_feature.shape[1] + self.slot_num, self.embedding_vec_size])
square_sum = tf.math.square(tf.math.reduce_sum(second, axis=1, keepdims=True)) # [batchsize, 1, embedding_vec_size]
sum_square = tf.math.reduce_sum(tf.math.square(second), axis=1, keepdims=True) # [batchsize, 1, embedding_vec_size]
second_out = 0.5 * (sum_square - square_sum) # [batchsize, 1, embedding_vec_size]
second_out = tf.math.reduce_sum(second_out, axis=-1, keepdims=False) # [batchsize, 1]
with tf.name_scope("Deep"):
for i, layer in enumerate(self.deep_dense):
if i % 2 == 0: # dense
hidden = layer(hidden)
else: # dropout
hidden = layer(hidden, training)
y = self.add_layer([hidden, first_out, second_out])
y = self.y_act(y) # [batchsize, 1]
return y
@property
def get_embedding_name(self):
return self.embedding_name
###Output
_____no_output_____
###Markdown
The cells above use embedding_plugin ops and TF layers to define a TF DeepFM model. Next, define an embedding layer with original TF ops and a DeepFM model that uses it. Because the embedding_plugin supports model parallelism, the parameters of the original TF embedding layer are distributed equally across the GPUs for a fair performance comparison.

Define Models with the Original TF Ops
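Both model variants compute the FM second-order interaction inside the `second_order` name scope. For reference, that computation is based on the standard factorization-machine identity (Rendle, 2010):

$$\sum_{i=1}^{n}\sum_{j=i+1}^{n}\langle \mathbf{v}_i, \mathbf{v}_j\rangle\, x_i x_j \;=\; \frac{1}{2}\sum_{f=1}^{k}\left[\left(\sum_{i=1}^{n} v_{i,f}\, x_i\right)^{2} - \sum_{i=1}^{n} v_{i,f}^{2}\, x_i^{2}\right]$$

Note that in these cells the variable named `square_sum` actually holds the square of the sum and `sum_square` holds the sum of squares, so `0.5 * (sum_square - square_sum)` evaluates to the negation of the right-hand side above; both models apply the same convention, but it is worth keeping in mind when comparing against other DeepFM implementations.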
###Code
# define a TF embedding layer with TF original ops
class OriginalEmbedding(tf.keras.layers.Layer):
def __init__(self,
vocabulary_size,
embedding_vec_size,
initializer='uniform',
combiner="sum",
gpus=[0]):
super(OriginalEmbedding, self).__init__()
self.vocabulary_size = vocabulary_size
self.embedding_vec_size = embedding_vec_size
if isinstance(initializer, str):
self.initializer = tf.keras.initializers.get(initializer)
else:
self.initializer = initializer
if combiner not in ["sum", "mean"]:
            raise RuntimeError("combiner must be one of {'sum', 'mean'}.")
self.combiner = combiner
if (not isinstance(gpus, list)) and (not isinstance(gpus, tuple)):
raise RuntimeError("gpus must be a list or tuple.")
self.gpus = gpus
def build(self, _):
if isinstance(self.initializer, tf.keras.initializers.Initializer):
if len(self.gpus) > 1:
self.embeddings_params = list()
mod_size = self.vocabulary_size % len(self.gpus)
vocabulary_size_each_gpu = [(self.vocabulary_size // len(self.gpus)) + (1 if dev_id < mod_size else 0)
for dev_id in range(len(self.gpus))]
for i, gpu in enumerate(self.gpus):
with tf.device("/gpu:%d" %gpu):
params_i = self.add_weight(name="embedding_" + str(gpu),
shape=(vocabulary_size_each_gpu[i], self.embedding_vec_size),
initializer=self.initializer)
self.embeddings_params.append(params_i)
else:
self.embeddings_params = self.add_weight(name='embeddings',
shape=(self.vocabulary_size, self.embedding_vec_size),
initializer=self.initializer)
else:
self.embeddings_params = self.initializer
@tf.function
def call(self, keys, output_shape):
result = tf.nn.embedding_lookup_sparse(self.embeddings_params, keys,
sp_weights=None, combiner=self.combiner)
return tf.reshape(result, output_shape)
# define DeepFM model with original TF embedding layer
class DeepFM_OriginalEmbedding(tf.keras.models.Model):
def __init__(self,
vocabulary_size,
embedding_vec_size,
dropout_rate, # list of float
deep_layers, # list of int
initializer,
gpus,
batch_size,
batch_size_eval,
embedding_type = 'localized',
slot_num=1,
seed=123):
super(DeepFM_OriginalEmbedding, self).__init__()
tf.keras.backend.clear_session()
tf.compat.v1.set_random_seed(seed)
self.vocabulary_size = vocabulary_size
self.embedding_vec_size = embedding_vec_size
self.dropout_rate = dropout_rate
self.deep_layers = deep_layers
self.gpus = gpus
self.batch_size = batch_size
self.batch_size_eval = batch_size_eval
self.slot_num = slot_num
self.embedding_type = embedding_type
self.original_embedding_layer = OriginalEmbedding(vocabulary_size=vocabulary_size,
embedding_vec_size=embedding_vec_size + 1,
initializer=initializer, gpus=gpus)
self.deep_dense = []
for i, deep_units in enumerate(self.deep_layers):
self.deep_dense.append(tf.keras.layers.Dense(units=deep_units, activation=None, use_bias=True,
kernel_initializer='glorot_normal',
bias_initializer='glorot_normal'))
self.deep_dense.append(tf.keras.layers.Dropout(dropout_rate[i]))
self.deep_dense.append(tf.keras.layers.Dense(units=1, activation=None, use_bias=True,
kernel_initializer='glorot_normal',
bias_initializer=tf.constant_initializer(0.01)))
self.add_layer = tf.keras.layers.Add()
self.y_act = tf.keras.layers.Activation(activation='sigmoid')
self.dense_multi = Multiply(1)
self.dense_embedding = Multiply(self.embedding_vec_size)
self.concat_1 = tf.keras.layers.Concatenate()
self.concat_2 = tf.keras.layers.Concatenate()
@tf.function
def call(self, dense_feature, sparse_feature, training=True):
"""
forward propagation.
#arguments:
dense_feature: [batch_size, dense_dim]
sparse_feature: for OriginalEmbedding, it is a SparseTensor, and the dense shape is [batch_size * slot_num, max_nnz];
for PluginEmbedding, it is a list of [row_offsets, value_tensors, nnz_array].
"""
with tf.name_scope("embedding_and_slice"):
dense_0 = tf.cast(tf.expand_dims(dense_feature, 2), dtype=tf.float32) # [batchsize, dense_dim, 1]
dense_mul = self.dense_multi(dense_0) # [batchsize, dense_dim, 1]
dense_emb = self.dense_embedding(dense_0) # [batchsize, dense_dim, embedding_vec_size]
dense_mul = tf.reshape(dense_mul, [dense_mul.shape[0], -1]) # [batchsize, dense_dim * 1]
dense_emb = tf.reshape(dense_emb, [dense_emb.shape[0], -1]) # [batchsize, dense_dim * embedding_vec_size]
sparse = self.original_embedding_layer(sparse_feature, output_shape=[-1, self.slot_num, self.embedding_vec_size + 1])
sparse_1 = tf.slice(sparse, [0, 0, self.embedding_vec_size], [-1, self.slot_num, 1]) #[batchsize, slot_num, 1]
sparse_1 = tf.squeeze(sparse_1, 2) # [batchsize, slot_num]
sparse_emb = tf.slice(sparse, [0, 0, 0], [-1, self.slot_num, self.embedding_vec_size]) #[batchsize, slot_num, embedding_vec_size]
sparse_emb = tf.reshape(sparse_emb, [-1, self.slot_num * self.embedding_vec_size]) #[batchsize, slot_num * embedding_vec_size]
with tf.name_scope("FM"):
with tf.name_scope("first_order"):
first = self.concat_1([dense_mul, sparse_1]) # [batchsize, dense_dim + slot_num]
first_out = tf.reduce_sum(first, axis=-1, keepdims=True) # [batchsize, 1]
with tf.name_scope("second_order"):
hidden = self.concat_2([dense_emb, sparse_emb]) # [batchsize, (dense_dim + slot_num) * embedding_vec_size]
second = tf.reshape(hidden, [-1, dense_feature.shape[1] + self.slot_num, self.embedding_vec_size])
square_sum = tf.math.square(tf.math.reduce_sum(second, axis=1, keepdims=True)) # [batchsize, 1, embedding_vec_size]
sum_square = tf.math.reduce_sum(tf.math.square(second), axis=1, keepdims=True) # [batchsize, 1, embedding_vec_size]
second_out = 0.5 * (sum_square - square_sum) # [batchsize, 1, embedding_vec_size]
second_out = tf.math.reduce_sum(second_out, axis=-1, keepdims=False) # [batchsize, 1]
with tf.name_scope("Deep"):
for i, layer in enumerate(self.deep_dense):
if i % 2 == 0: # dense
hidden = layer(hidden)
else: # dropout
hidden = layer(hidden, training)
y = self.add_layer([hidden, first_out, second_out])
y = self.y_act(y) # [batchsize, 1]
return y
###Output
_____no_output_____
###Markdown
A dataset is needed to train these models. The [Kaggle Criteo dataset](http://labs.criteo.com/2014/02/kaggle-display-advertising-challenge-dataset/) provided by CriteoLabs is used as the training dataset. The original training set contains 45,840,617 examples. Each example contains a label (0 by default, or 1 if the ad was clicked) and 39 features, of which 13 are integer and the other 26 are categorical. Since TFRecord is suitable for the training process and the Criteo dataset is missing numerous values across the feature columns, preprocessing is needed. The original test set won't be used because it doesn't contain labels.

Dataset processing
1. Download the dataset from [https://ailab.criteo.com/download-criteo-1tb-click-logs-dataset/](http://azuremlsampleexperiments.blob.core.windows.net/criteo/day_1.gz).
2. Extract the dataset by running the following command:
```shell
$ gunzip day_1.gz
```
3. The whole dataset is too large, so take a subset with:
```shell
$ head -n 45840617 day_1 > train.txt
```
4. Preprocess the dataset and fill in missing values. Preprocessing functions are defined in [preprocess.py](../tools/embedding_plugin/performance_profile/preprocess.py). Open that file and check the code (a sketch of typical missing-value handling follows).
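For intuition only, missing-value handling for Criteo-style data is commonly done by filling absent integer features with a default value and absent categorical features with a placeholder category. The sketch below illustrates that idea and is not the actual `preprocess.py` logic; the `I*` column naming follows the feature description used later in this notebook, while the `C*` prefix for categoricals and the tab-separated layout are assumptions.

```python
import pandas as pd

int_cols = ["I%d" % i for i in range(1, 14)]   # 13 integer features
cat_cols = ["C%d" % i for i in range(1, 27)]   # 26 categorical features (assumed names)

# Assumed tab-separated layout with no header; read a small sample only.
df = pd.read_csv("train.txt", sep="\t", names=["label"] + int_cols + cat_cols,
                 nrows=1000)

# Typical imputation: 0 for missing integers, a placeholder for categoricals.
df[int_cols] = df[int_cols].fillna(0)
df[cat_cols] = df[cat_cols].fillna("missing")
```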
###Code
# specify the source csv name and the output csv name; running this command will do the preprocessing.
# Warning: this command will take several hours.
%run ../tools/embedding_plugin/performance_profile/preprocess.py \
--src_csv_path=../tools/embedding_plugin/train.txt \
--dst_csv_path=../tools/embedding_plugin/train.out.txt \
--normalize_dense=0 --feature_cross=0
###Output
_____no_output_____
###Markdown
5. Split the dataset by running the following commands:
```shell
$ head -n 36672493 train.out.txt > train
$ tail -n 9168124 train.out.txt > valtest
$ head -n 4584062 valtest > val
$ tail -n 4584062 valtest > test
```
6. Convert the dataset to a TFRecord file. Conversion functions are defined in [txt2tfrecord.py](../tools/embedding_plugin/performance_profile/txt2tfrecord.py). Open that file and check the code (a sketch of the per-row serialization follows).

After the data preprocessing is completed, *.tfrecord file(s) will be generated and can be used for training. The training loop can now be configured to use the dataset and models.
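For intuition, each preprocessed row becomes a serialized `tf.train.Example` whose features line up with the `feature_desc` used later in this notebook; after preprocessing, the categorical features are integer ids, which is why every column can be stored as int64. A minimal sketch under those assumptions (the real converter lives in txt2tfrecord.py, and the `C*` names are assumed):

```python
import tensorflow as tf

def _int64(values):
    # Wrap a list of Python ints as an int64 Feature.
    return tf.train.Feature(int64_list=tf.train.Int64List(value=values))

def make_example(label, ints, cats):
    # Column names follow the I*/C* convention; the C* names are assumed here.
    feats = {"label": _int64([label])}
    feats.update({"I%d" % i: _int64([v]) for i, v in enumerate(ints, 1)})
    feats.update({"C%d" % i: _int64([v]) for i, v in enumerate(cats, 1)})
    example = tf.train.Example(features=tf.train.Features(feature=feats))
    return example.SerializeToString()

# Example: one row with 13 integer and 26 categorical-id features.
record = make_example(1, [0] * 13, [0] * 26)
```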
###Code
# specify the source name and the output tfrecord name; running this command will do the conversion.
# Warning: this command will take about half an hour.
%run ../tools/embedding_plugin/performance_profile/txt2tfrecord.py \
--src_txt_name=train \
--dst_tfrecord_name=train.tfrecord \
--normalized=0 --use_multi_process=1 \
--shard_num=1
# if multiple tfrecord files are wanted, set shard_num to the number of files.
###Output
_____no_output_____
###Markdown
Define the training loop and do the training
In [read_data.py](../tools/embedding_plugin/performance_profile/read_data.py), some preprocessing functions and the TF data-reading pipeline creation functions are defined.
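For orientation, a TFRecord reading pipeline of this kind usually amounts to a `tf.data.TFRecordDataset` followed by a batched parse step. The sketch below is a simplified illustration under that assumption and is not the actual `CreateDataset` implementation, which additionally prepares the sparse inputs for the embedding layers:

```python
import tensorflow as tf

def make_pipeline(filenames, feature_desc, batch_size, n_epochs):
    """Simplified TFRecord pipeline: read, repeat, batch, then parse."""
    ds = tf.data.TFRecordDataset(filenames)
    ds = ds.repeat(n_epochs)
    ds = ds.batch(batch_size, drop_remainder=True)
    # Parse a whole batch of serialized examples at once.
    ds = ds.map(lambda records: tf.io.parse_example(records, feature_desc),
                num_parallel_calls=tf.data.experimental.AUTOTUNE)
    return ds.prefetch(1)
```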
###Code
# set env path, so that some modules can be imported
sys.path.append("../tools/embedding_plugin/performance_profile/")
import txt2tfrecord as utils
from read_data import CreateDataset
import time
import logging
logging.basicConfig(format='%(asctime)s %(message)s')
logging.root.setLevel('INFO')
# choose which model to train
which_model = "Plugin" # change it to "Original" if you want to try the model defined with original TF ops.
# set some hyper parameters for training process
if ("Plugin" == which_model):
batch_size = 16384
n_epochs = 1
distribute_keys = 1
gpus = [0] # use GPU0
embedding_type = 'distributed'
vocabulary_size = 1737710
embedding_vec_size = 10
slot_num = 26
batch_size_eval = 1 * len(gpus)
elif ("Original" == which_model):
batch_size = 16384
n_epochs = 1
distribute_keys = 0
gpus = [0] # use GPU0
vocabulary_size = 1737710
embedding_vec_size = 10
slot_num = 26
batch_size_eval = 1 * len(gpus)
embedding_type = 'distributed'
# define feature_description to read tfrecord examples.
cols = [utils.idx2key(idx, False) for idx in range(0, utils.NUM_TOTAL_COLUMNS)]
feature_desc = dict()
for col in cols:
if col == 'label' or col.startswith("I"):
        feature_desc[col] = tf.io.FixedLenFeature([], tf.int64) # scalar
else:
feature_desc[col] = tf.io.FixedLenFeature([1], tf.int64) # [slot_num, nnz]
# please set data_path to your tfrecord
data_path = "../tools/embedding_plugin/performance_profile/"
# create the tfrecord reading pipeline
dataset_names = [data_path + "./train.tfrecord"]
dataset = CreateDataset(dataset_names=dataset_names,
feature_desc=feature_desc,
batch_size=batch_size,
n_epochs=n_epochs,
slot_num=slot_num,
max_nnz=1,
convert_to_csr=False,
gpu_count=len(gpus),
embedding_type=embedding_type,
get_row_indices=True)()
# define loss function and optimizer used in other TF layers.
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
loss_fn = tf.keras.losses.BinaryCrossentropy(from_logits=False)
# create model instance
if "Original" == which_model:
model = DeepFM_OriginalEmbedding(vocabulary_size=vocabulary_size, embedding_vec_size=embedding_vec_size,
embedding_type=embedding_type,
dropout_rate=[0.5] * 10, deep_layers=[1024] * 10,
initializer='uniform', gpus=gpus, batch_size=batch_size,
batch_size_eval=batch_size_eval,
slot_num=slot_num)
elif "Plugin" == which_model:
hugectr_tf_ops_v2.reset()
model = DeepFM_PluginEmbedding(vocabulary_size=vocabulary_size, embedding_vec_size=embedding_vec_size,
embedding_type=embedding_type,
dropout_rate=[0.5] * 10, deep_layers=[1024] * 10,
initializer='uniform', gpus=gpus, batch_size=batch_size,
batch_size_eval=batch_size_eval,
slot_num=slot_num)
# define training step
@tf.function
def _train_step(dense_batch, sparse_batch, y_batch, model, loss_fn, optimizer):
with tf.GradientTape() as tape:
y_batch = tf.cast(y_batch, dtype=tf.float32)
logits = model(dense_batch, sparse_batch, training=True)
loss = loss_fn(y_batch, logits)
loss /= dense_batch.shape[0]
grads = tape.gradient(loss, model.trainable_weights)
optimizer.apply_gradients(zip(grads, model.trainable_weights))
return loss
# training loop
logging.info("begin to train")
begin_time = time.time()
display_begin = begin_time
for step, datas in enumerate(dataset):
label, dense, others = datas[0], datas[1], datas[2:]
if "Original" == which_model:
sparse = others[-1]
elif "Plugin" == which_model:
sparse = others[0:2]
sparse = hugectr_tf_ops_v2.broadcast_then_convert_to_csr(model.get_embedding_name,
row_indices=sparse[0], values=sparse[1],
T=[tf.int32]*len(gpus))
train_loss = _train_step(dense, sparse, label, model, loss_fn, optimizer)
loss_value = train_loss.numpy()
if (step % 100 == 0 and step != 0):
display_end = time.time()
logging.info("step: %d, loss: %.7f, elapsed time: %.5f seconds." %(step, loss_value, (display_end - display_begin)))
display_begin = display_end
end_time = time.time()
logging.info("Train End. Elapsed Time: %.3f seconds." %(end_time - begin_time))
###Output
2021-01-30 07:52:25,346 begin to train
2021-01-30 07:52:37,596 step: 100, loss: 0.0000278, elapsed time: 12.24864 seconds.
2021-01-30 07:52:48,122 step: 200, loss: 0.0000301, elapsed time: 10.52632 seconds.
2021-01-30 07:52:59,111 step: 300, loss: 0.0000292, elapsed time: 10.98891 seconds.
2021-01-30 07:53:10,397 step: 400, loss: 0.0000298, elapsed time: 11.28664 seconds.
2021-01-30 07:53:21,045 step: 500, loss: 0.0000308, elapsed time: 10.64784 seconds.
2021-01-30 07:53:31,526 step: 600, loss: 0.0000298, elapsed time: 10.48030 seconds.
2021-01-30 07:53:41,712 step: 700, loss: 0.0000298, elapsed time: 10.18635 seconds.
2021-01-30 07:53:52,467 step: 800, loss: 0.0000304, elapsed time: 10.75503 seconds.
2021-01-30 07:54:03,011 step: 900, loss: 0.0000299, elapsed time: 10.54400 seconds.
2021-01-30 07:54:14,301 step: 1000, loss: 0.0000307, elapsed time: 11.28991 seconds.
2021-01-30 07:54:25,194 step: 1100, loss: 0.0000286, elapsed time: 10.89364 seconds.
2021-01-30 07:54:35,751 step: 1200, loss: 0.0000310, elapsed time: 10.55683 seconds.
2021-01-30 07:54:46,374 step: 1300, loss: 0.0000318, elapsed time: 10.62237 seconds.
2021-01-30 07:54:56,874 step: 1400, loss: 0.0000299, elapsed time: 10.50061 seconds.
2021-01-30 07:55:07,540 step: 1500, loss: 0.0000308, elapsed time: 10.66558 seconds.
2021-01-30 07:55:18,125 step: 1600, loss: 0.0000317, elapsed time: 10.58490 seconds.
2021-01-30 07:55:28,575 step: 1700, loss: 0.0000298, elapsed time: 10.45029 seconds.
2021-01-30 07:55:39,030 step: 1800, loss: 0.0000329, elapsed time: 10.45500 seconds.
2021-01-30 07:55:49,386 step: 1900, loss: 0.0000326, elapsed time: 10.35561 seconds.
2021-01-30 07:55:59,627 step: 2000, loss: 0.0000334, elapsed time: 10.24157 seconds.
2021-01-30 07:56:09,869 step: 2100, loss: 0.0000335, elapsed time: 10.24189 seconds.
2021-01-30 07:56:20,113 step: 2200, loss: 0.0000348, elapsed time: 10.24432 seconds.
2021-01-30 07:56:24,112 Train End. Elapsed Time: 238.765 seconds.
###Markdown
In this configuration, `tf.data.Dataset` produces training data slowly, which makes data loading the bottleneck of the whole training process. Therefore, the elapsed training times for `Original` and `Plugin` are similar.

API signature
All embedding_plugin APIs are defined in [hugectr_tf_ops_v2.py](../tools/embedding_plugin/python/hugectr_tf_ops_v2.py). The embedding_plugin takes `COO (Coordinate)` format as its input format when `fprop` is used. In some cases, `fprop_experimental` can achieve better performance than `fprop`, but it is not stable. If `fprop_experimental` is used, the input data format should be `CSR (Compressed Sparse Row)`. For more details on how to convert your input data to `CSR` or `COO` format, please refer to [samples/format_processing.py](../tools/embedding_plugin/samples/format_processing.py). For more code samples, please refer to [samples/sample_with_fprop*.py](../tools/embedding_plugin/samples/sample_with_fprop.py).
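As a quick reference, and assuming the same `-1`-padded dense keys layout used in the verification example above, the sketch below shows how such a matrix maps to COO components (the row indices and values fed to `broadcast_then_convert_to_csr`) and to a CSR row-offsets array; see format_processing.py for the authoritative conversions.

```python
import numpy as np

keys = np.array([[0, -1],
                 [1, -1],
                 [2,  6]], dtype=np.int64)  # [batch_size * slot_num, max_nnz]

# COO: (row, col) indices and values of the valid entries.
rows, cols = np.nonzero(keys != -1)          # rows = [0 1 2 2], cols = [0 0 0 1]
values = keys[rows, cols]                    # [0 1 2 6]

# CSR: per-row offsets into `values`, i.e. cumulative nnz counts.
nnz_per_row = np.sum(keys != -1, axis=-1)    # [1 1 2]
row_offsets = np.concatenate([[0], np.cumsum(nnz_per_row)])  # [0 1 2 4]
```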
###Code
%%html
<style>
table {float:left}
</style>
###Output
_____no_output_____ |
machine_translation_project.ipynb | ###Markdown
Artificial Intelligence Nanodegree Machine Translation Project
In this notebook, sections that end with **'(IMPLEMENTATION)'** in the header indicate that the following blocks of code will require additional functionality which you must provide. Please be sure to read the instructions carefully!

Introduction
In this notebook, you will build a deep neural network that functions as part of an end-to-end machine translation pipeline. Your completed pipeline will accept English text as input and return the French translation.
- **Preprocess** - You'll convert text to sequences of integers.
- **Models** - Create models which accept a sequence of integers as input and return a probability distribution over possible translations. After learning about the basic types of neural networks that are often used for machine translation, you will engage in your own investigations to design your own model!
- **Prediction** - Run the model on English text.
###Code
%load_ext autoreload
%aimport helper, tests
%autoreload 1
import collections
import helper
import numpy as np
import project_tests as tests
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.models import Model
from keras.layers import GRU, Input, Dense, TimeDistributed, Activation, RepeatVector, Bidirectional
from keras.layers.embeddings import Embedding
from keras.optimizers import Adam
from keras.losses import sparse_categorical_crossentropy
###Output
Using TensorFlow backend.
###Markdown
Verify access to the GPU
The following test applies only if you expect to be using a GPU, e.g., while running in a Udacity Workspace or using an AWS instance with GPU support. Run the next cell, and verify that the device_type is "GPU".
- If the device is not GPU & you are running from a Udacity Workspace, then save your workspace with the icon at the top, then click "enable" at the bottom of the workspace.
- If the device is not GPU & you are running from an AWS instance, then refer to the cloud computing instructions in the classroom to verify your setup steps.
###Code
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
###Output
[name: "/cpu:0"
device_type: "CPU"
memory_limit: 268435456
locality {
}
incarnation: 10161393366285996921
, name: "/gpu:0"
device_type: "GPU"
memory_limit: 357433344
locality {
bus_id: 1
}
incarnation: 8337774331018225163
physical_device_desc: "device: 0, name: Tesla K80, pci bus id: 0000:00:04.0"
]
###Markdown
Dataset
We begin by investigating the dataset that will be used to train and evaluate your pipeline. The most common datasets used for machine translation are from [WMT](http://www.statmt.org/). However, those would take a long time to train a neural network on. Instead, we'll be using a dataset we created for this project that contains a small vocabulary, so you'll be able to train your model in a reasonable time.

Load Data
The data is located in `data/small_vocab_en` and `data/small_vocab_fr`. The `small_vocab_en` file contains English sentences whose French translations are in the `small_vocab_fr` file. Load the English and French data from these files by running the cell below.
###Code
# Load English data
english_sentences = helper.load_data('data/small_vocab_en')
# Load French data
french_sentences = helper.load_data('data/small_vocab_fr')
print('Dataset Loaded')
###Output
Dataset Loaded
###Markdown
Files
Each line in `small_vocab_en` contains an English sentence, with the respective translation on the corresponding line of `small_vocab_fr`. View the first two lines from each file.
###Code
for sample_i in range(2):
print('small_vocab_en Line {}: {}'.format(sample_i + 1, english_sentences[sample_i]))
print('small_vocab_fr Line {}: {}'.format(sample_i + 1, french_sentences[sample_i]))
###Output
small_vocab_en Line 1: new jersey is sometimes quiet during autumn , and it is snowy in april .
small_vocab_fr Line 1: new jersey est parfois calme pendant l' automne , et il est neigeux en avril .
small_vocab_en Line 2: the united states is usually chilly during july , and it is usually freezing in november .
small_vocab_fr Line 2: les états-unis est généralement froid en juillet , et il gèle habituellement en novembre .
###Markdown
From looking at the sentences, you can see they have already been preprocessed: the punctuation has been delimited using spaces, and all the text has been converted to lowercase. This should save you some time, but the text still requires more preprocessing.

Vocabulary
The complexity of the problem is determined by the complexity of the vocabulary. A more complex vocabulary is a more complex problem. Let's look at the complexity of the dataset we'll be working with.
###Code
english_words_counter = collections.Counter([word for sentence in english_sentences for word in sentence.split()])
french_words_counter = collections.Counter([word for sentence in french_sentences for word in sentence.split()])
print('{} English words.'.format(len([word for sentence in english_sentences for word in sentence.split()])))
print('{} unique English words.'.format(len(english_words_counter)))
print('10 Most common words in the English dataset:')
print('"' + '" "'.join(list(zip(*english_words_counter.most_common(10)))[0]) + '"')
print()
print('{} French words.'.format(len([word for sentence in french_sentences for word in sentence.split()])))
print('{} unique French words.'.format(len(french_words_counter)))
print('10 Most common words in the French dataset:')
print('"' + '" "'.join(list(zip(*french_words_counter.most_common(10)))[0]) + '"')
###Output
1823250 English words.
227 unique English words.
10 Most common words in the English dataset:
"is" "," "." "in" "it" "during" "the" "but" "and" "sometimes"
1961295 French words.
355 unique French words.
10 Most common words in the French dataset:
"est" "." "," "en" "il" "les" "mais" "et" "la" "parfois"
###Markdown
For comparison, _Alice's Adventures in Wonderland_ contains 2,766 unique words out of a total of 15,500 words.

Preprocess
For this project, you won't use text data as input to your model. Instead, you'll convert the text into sequences of integers using the following preprocess methods:
1. Tokenize the words into ids.
2. Add padding to make all the sequences the same length.

Time to start preprocessing the data...

Tokenize (IMPLEMENTATION)
For a neural network to predict on text data, the text first has to be turned into data it can understand. Text data like "dog" is a sequence of ASCII character encodings. Since a neural network is a series of multiplication and addition operations, the input data needs to be numbers.
We can turn each character into a number or each word into a number. These are called character and word ids, respectively. Character ids are used for character-level models that generate text predictions for each character. A word-level model uses word ids to generate text predictions for each word. Word-level models tend to learn better, since they are lower in complexity, so we'll use those.
Turn each sentence into a sequence of word ids using Keras's [`Tokenizer`](https://keras.io/preprocessing/text/tokenizer) class. Use this function to tokenize `english_sentences` and `french_sentences` in the cell below.
Running the cell will run `tokenize` on sample data and show output for debugging.
###Code
def tokenize(x):
"""
Tokenize x
:param x: List of sentences/strings to be tokenized
:return: Tuple of (tokenized x data, tokenizer used to tokenize x)
"""
# TODO: Implement
tokenizer_object = Tokenizer()
tokenizer_object.fit_on_texts(x)
text_seq = tokenizer_object.texts_to_sequences(x)
return text_seq, tokenizer_object
tests.test_tokenize(tokenize)
# Tokenize Example output
text_sentences = [
'The quick brown fox jumps over the lazy dog .',
'By Jove , my quick study of lexicography won a prize .',
'This is a short sentence .']
text_tokenized, text_tokenizer = tokenize(text_sentences)
print(text_tokenizer.word_index)
print()
for sample_i, (sent, token_sent) in enumerate(zip(text_sentences, text_tokenized)):
print('Sequence {} in x'.format(sample_i + 1))
print(' Input: {}'.format(sent))
print(' Output: {}'.format(token_sent))
###Output
{'the': 1, 'quick': 2, 'a': 3, 'brown': 4, 'fox': 5, 'jumps': 6, 'over': 7, 'lazy': 8, 'dog': 9, 'by': 10, 'jove': 11, 'my': 12, 'study': 13, 'of': 14, 'lexicography': 15, 'won': 16, 'prize': 17, 'this': 18, 'is': 19, 'short': 20, 'sentence': 21}
Sequence 1 in x
Input: The quick brown fox jumps over the lazy dog .
Output: [1, 2, 4, 5, 6, 7, 1, 8, 9]
Sequence 2 in x
Input: By Jove , my quick study of lexicography won a prize .
Output: [10, 11, 12, 2, 13, 14, 15, 16, 3, 17]
Sequence 3 in x
Input: This is a short sentence .
Output: [18, 19, 3, 20, 21]
###Markdown
Padding (IMPLEMENTATION)
When batching sequences of word ids together, each sequence needs to be the same length. Since sentences vary in length, we can add padding to the end of the sequences to make them the same length.
Make sure all the English sequences have the same length, and all the French sequences have the same length, by adding padding to the **end** of each sequence using Keras's [`pad_sequences`](https://keras.io/preprocessing/sequence/pad_sequences) function.
###Code
def pad(x, length=None):
"""
Pad x
:param x: List of sequences.
:param length: Length to pad the sequence to. If None, use length of longest sequence in x.
:return: Padded numpy array of sequences
"""
# TODO: Implement
    if length is None:
length = max([len(i) for i in x])
padded_seq = pad_sequences(sequences=x, maxlen=length, padding='post', value=0)
return padded_seq
tests.test_pad(pad)
# Pad Tokenized output
test_pad = pad(text_tokenized)
for sample_i, (token_sent, pad_sent) in enumerate(zip(text_tokenized, test_pad)):
print('Sequence {} in x'.format(sample_i + 1))
print(' Input: {}'.format(np.array(token_sent)))
print(' Output: {}'.format(pad_sent))
###Output
Sequence 1 in x
Input: [1 2 4 5 6 7 1 8 9]
Output: [1 2 4 5 6 7 1 8 9 0]
Sequence 2 in x
Input: [10 11 12 2 13 14 15 16 3 17]
Output: [10 11 12 2 13 14 15 16 3 17]
Sequence 3 in x
Input: [18 19 3 20 21]
Output: [18 19 3 20 21 0 0 0 0 0]
###Markdown
Preprocess Pipeline
Your focus for this project is to build the neural network architecture, so we won't ask you to create a preprocess pipeline. Instead, we've provided you with the implementation of the `preprocess` function.
###Code
def preprocess(x, y):
"""
Preprocess x and y
:param x: Feature List of sentences
:param y: Label List of sentences
:return: Tuple of (Preprocessed x, Preprocessed y, x tokenizer, y tokenizer)
"""
preprocess_x, x_tk = tokenize(x)
preprocess_y, y_tk = tokenize(y)
preprocess_x = pad(preprocess_x)
preprocess_y = pad(preprocess_y)
# Keras's sparse_categorical_crossentropy function requires the labels to be in 3 dimensions
preprocess_y = preprocess_y.reshape(*preprocess_y.shape, 1)
return preprocess_x, preprocess_y, x_tk, y_tk
preproc_english_sentences, preproc_french_sentences, english_tokenizer, french_tokenizer =\
preprocess(english_sentences, french_sentences)
max_english_sequence_length = preproc_english_sentences.shape[1]
max_french_sequence_length = preproc_french_sentences.shape[1]
english_vocab_size = len(english_tokenizer.word_index)
french_vocab_size = len(french_tokenizer.word_index)
print('Data Preprocessed')
print("Max English sentence length:", max_english_sequence_length)
print("Max French sentence length:", max_french_sequence_length)
print("English vocabulary size:", english_vocab_size)
print("French vocabulary size:", french_vocab_size)
###Output
Data Preprocessed
Max English sentence length: 15
Max French sentence length: 21
English vocabulary size: 199
French vocabulary size: 344
###Markdown
Models
In this section, you will experiment with various neural network architectures. You will begin by training four relatively simple architectures.
- Model 1 is a simple RNN
- Model 2 is an RNN with Embedding
- Model 3 is a Bidirectional RNN
- Model 4 is an optional Encoder-Decoder RNN

After experimenting with the four simple architectures, you will construct a deeper architecture that is designed to outperform all four models.

Ids Back to Text
The neural network will translate the input to word ids, which isn't the final form we want; we want the French translation. The function `logits_to_text` will bridge the gap between the logits from the neural network and the French translation. You'll be using this function to better understand the output of the neural network.
###Code
def logits_to_text(logits, tokenizer):
"""
Turn logits from a neural network into text using the tokenizer
:param logits: Logits from a neural network
:param tokenizer: Keras Tokenizer fit on the labels
:return: String that represents the text of the logits
"""
index_to_words = {id: word for word, id in tokenizer.word_index.items()}
index_to_words[0] = '<PAD>'
return ' '.join([index_to_words[prediction] for prediction in np.argmax(logits, 1)])
print('`logits_to_text` function loaded.')
###Output
`logits_to_text` function loaded.
###Markdown
Model 1: RNN (IMPLEMENTATION)
A basic RNN model is a good baseline for sequence data. In this model, you'll build an RNN that translates English to French.
###Code
def simple_model(input_shape, output_sequence_length, english_vocab_size, french_vocab_size):
"""
Build and train a basic RNN on x and y
:param input_shape: Tuple of input shape
:param output_sequence_length: Length of output sequence
:param english_vocab_size: Number of unique English words in the dataset
:param french_vocab_size: Number of unique French words in the dataset
:return: Keras model built, but not trained
"""
# Build a basic RNN Model with 1 hidden layer, 1 input and 1 output layer
learning_rate = 0.1
#Input seq
input_seq = Input(shape=input_shape[1:])
# Hidden Layer
hidden_layer = GRU(output_sequence_length, return_sequences=True)(input_seq)
# Output Layer
output_layer = TimeDistributed(Dense(french_vocab_size, activation='softmax'))(hidden_layer)
model = Model(inputs=input_seq, outputs=output_layer)
#Model Compilation
model.compile(loss=sparse_categorical_crossentropy,
optimizer=Adam(learning_rate),
metrics=['accuracy'])
return model
tests.test_simple_model(simple_model)
# Reshaping the input to work with a basic RNN
tmp_x = pad(preproc_english_sentences, max_french_sequence_length)
tmp_x = tmp_x.reshape((-1, preproc_french_sentences.shape[-2], 1))
# Train the neural network
simple_rnn_model = simple_model(
tmp_x.shape,
max_french_sequence_length,
english_vocab_size,
french_vocab_size)
simple_rnn_model.fit(tmp_x, preproc_french_sentences, batch_size=1024, epochs=10, validation_split=0.2)
# Print prediction(s)
print(logits_to_text(simple_rnn_model.predict(tmp_x[:1])[0], french_tokenizer))
###Output
Train on 110288 samples, validate on 27573 samples
Epoch 1/10
110288/110288 [==============================] - 6s 58us/step - loss: 2.3509 - acc: 0.4753 - val_loss: nan - val_acc: 0.5084
Epoch 2/10
110288/110288 [==============================] - 6s 53us/step - loss: 1.9339 - acc: 0.5207 - val_loss: nan - val_acc: 0.5358
Epoch 3/10
110288/110288 [==============================] - 6s 53us/step - loss: 1.8113 - acc: 0.5432 - val_loss: nan - val_acc: 0.5564
Epoch 4/10
110288/110288 [==============================] - 6s 53us/step - loss: 1.7461 - acc: 0.5553 - val_loss: nan - val_acc: 0.5626
Epoch 5/10
110288/110288 [==============================] - 6s 53us/step - loss: 1.7323 - acc: 0.5587 - val_loss: nan - val_acc: 0.5607
Epoch 6/10
110288/110288 [==============================] - 6s 53us/step - loss: 1.7132 - acc: 0.5621 - val_loss: nan - val_acc: 0.5538
Epoch 7/10
110288/110288 [==============================] - 6s 54us/step - loss: 1.7498 - acc: 0.5522 - val_loss: nan - val_acc: 0.5661
Epoch 8/10
110288/110288 [==============================] - 6s 53us/step - loss: 1.7403 - acc: 0.5539 - val_loss: nan - val_acc: 0.5473
Epoch 9/10
110288/110288 [==============================] - 6s 52us/step - loss: 1.7016 - acc: 0.5600 - val_loss: nan - val_acc: 0.5601
Epoch 10/10
110288/110288 [==============================] - 6s 52us/step - loss: 1.7521 - acc: 0.5549 - val_loss: nan - val_acc: 0.5529
new new est parfois est en en et il est est en en <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD>
###Markdown
Model 2: Embedding (IMPLEMENTATION)
You've turned the words into ids, but there's a better representation of a word: the word embedding. An embedding is a vector representation of a word that sits close to similar words in n-dimensional space, where n is the size of the embedding vectors. In this model, you'll create an RNN that uses embeddings.
###Code
def embed_model(input_shape, output_sequence_length, english_vocab_size, french_vocab_size):
"""
Build and train a RNN model using word embedding on x and y
:param input_shape: Tuple of input shape
:param output_sequence_length: Length of output sequence
:param english_vocab_size: Number of unique English words in the dataset
:param french_vocab_size: Number of unique French words in the dataset
:return: Keras model built, but not trained
"""
    # Build an embedding model with 2 hidden layers, 1 input and 1 output layer
    # Experimented with the number of units in the GRU layer and the number of hidden layers to improve the accuracy
    # The learning rate was also tuned to increase the accuracy
#Input seq
input_seq = Input(shape=input_shape[1:])
#Embedding layer
embedding_layer = Embedding(english_vocab_size, output_sequence_length)(input_seq)
# Hidden Layer 1
hidden_layer_1 = GRU(512, return_sequences=True)(embedding_layer)
# Hidden Layer 2
hidden_layer_2 = TimeDistributed(Dense(french_vocab_size*4, activation='relu'))(hidden_layer_1)
# Output Layer
output_layer = Dense(french_vocab_size, activation='softmax')(hidden_layer_2)
model = Model(inputs=input_seq, outputs=output_layer)
learning_rate = 0.01
#Model Compilation
model.compile(loss=sparse_categorical_crossentropy,
optimizer=Adam(learning_rate),
metrics=['accuracy'])
return model
tests.test_embed_model(embed_model)
# TODO: Reshape the input
temp_x = pad(preproc_english_sentences, preproc_french_sentences.shape[1])
temp_x = temp_x.reshape((-1,preproc_french_sentences.shape[-2]))
# TODO: Train the neural network
embed_model = embed_model(
temp_x.shape,
preproc_french_sentences.shape[1],
len(english_tokenizer.word_index) + 1,
len(french_tokenizer.word_index) + 1)
embed_model.fit(temp_x, preproc_french_sentences, batch_size=1024, epochs=10, validation_split=0.2)
# TODO: Print prediction(s)
print(logits_to_text(embed_model.predict(temp_x[:1])[0], french_tokenizer))
###Output
Train on 110288 samples, validate on 27573 samples
Epoch 1/10
110288/110288 [==============================] - 30s 268us/step - loss: 1.3529 - acc: 0.6961 - val_loss: 0.4688 - val_acc: 0.8553
Epoch 2/10
110288/110288 [==============================] - 29s 261us/step - loss: 0.3545 - acc: 0.8837 - val_loss: 0.2892 - val_acc: 0.9021
Epoch 3/10
110288/110288 [==============================] - 29s 259us/step - loss: 0.2572 - acc: 0.9121 - val_loss: 0.2434 - val_acc: 0.9170
Epoch 4/10
110288/110288 [==============================] - 28s 258us/step - loss: 0.2231 - acc: 0.9226 - val_loss: 0.2150 - val_acc: 0.9260
Epoch 5/10
110288/110288 [==============================] - 28s 258us/step - loss: 0.2050 - acc: 0.9280 - val_loss: 0.2120 - val_acc: 0.9264
Epoch 6/10
110288/110288 [==============================] - 28s 257us/step - loss: 0.1939 - acc: 0.9311 - val_loss: 0.2043 - val_acc: 0.9294
Epoch 7/10
110288/110288 [==============================] - 28s 257us/step - loss: 0.1835 - acc: 0.9344 - val_loss: 0.1910 - val_acc: 0.9331
Epoch 8/10
110288/110288 [==============================] - 28s 257us/step - loss: 0.1774 - acc: 0.9361 - val_loss: 0.1894 - val_acc: 0.9333
Epoch 9/10
110288/110288 [==============================] - 28s 256us/step - loss: 0.1744 - acc: 0.9367 - val_loss: 0.1871 - val_acc: 0.9346
Epoch 10/10
110288/110288 [==============================] - 28s 256us/step - loss: 0.1707 - acc: 0.9378 - val_loss: 0.1847 - val_acc: 0.9347
new jersey est parfois calme en l' automne et il est neigeux en avril <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD>
###Markdown
Model 3: Bidirectional RNNs (IMPLEMENTATION)
One restriction of a plain RNN is that it can't see future input, only the past. This is where bidirectional recurrent neural networks come in: they process the sequence in both directions, so each output can draw on context from both the past and the future.
###Code
def bd_model(input_shape, output_sequence_length, english_vocab_size, french_vocab_size):
"""
Build and train a bidirectional RNN model on x and y
:param input_shape: Tuple of input shape
:param output_sequence_length: Length of output sequence
:param english_vocab_size: Number of unique English words in the dataset
:param french_vocab_size: Number of unique French words in the dataset
:return: Keras model built, but not trained
"""
    # Build a basic bidirectional model with 1 hidden layer, 1 input and 1 output layer
    # The learning rate was also tuned to increase the accuracy
#Input seq
input_seq = Input(shape=input_shape[1:])
# Hidden Layer
hidden_layer_1 = Bidirectional(GRU(output_sequence_length, return_sequences=True))(input_seq)
# Output Layer
output_layer = Dense(french_vocab_size, activation='softmax')(hidden_layer_1)
model = Model(inputs=input_seq, outputs=output_layer)
learning_rate = 0.01
# Model Compilation
model.compile(loss=sparse_categorical_crossentropy,
optimizer=Adam(learning_rate),
metrics=['accuracy'])
return model
tests.test_bd_model(bd_model)
# TODO: Train and print prediction(s)
temp_x = pad(preproc_english_sentences, preproc_french_sentences.shape[1])
temp_x = temp_x.reshape((-1, preproc_french_sentences.shape[-2], 1))
# TODO: Train the neural network
bd_model = bd_model(
temp_x.shape,
preproc_french_sentences.shape[1],
len(english_tokenizer.word_index) + 1,
len(french_tokenizer.word_index) + 1)
bd_model.fit(temp_x, preproc_french_sentences, batch_size=1024, epochs=10, validation_split=0.2)
# TODO: Print prediction(s)
print(logits_to_text(bd_model.predict(temp_x[:1])[0], french_tokenizer))
###Output
Train on 110288 samples, validate on 27573 samples
Epoch 1/10
110288/110288 [==============================] - 11s 102us/step - loss: 2.4872 - acc: 0.5084 - val_loss: 1.7560 - val_acc: 0.5768
Epoch 2/10
110288/110288 [==============================] - 10s 89us/step - loss: 1.5871 - acc: 0.5957 - val_loss: 1.4699 - val_acc: 0.6147
Epoch 3/10
110288/110288 [==============================] - 10s 87us/step - loss: 1.4003 - acc: 0.6176 - val_loss: 1.3326 - val_acc: 0.6283
Epoch 4/10
110288/110288 [==============================] - 10s 87us/step - loss: 1.2872 - acc: 0.6391 - val_loss: 1.2483 - val_acc: 0.6454
Epoch 5/10
110288/110288 [==============================] - 10s 87us/step - loss: 1.2242 - acc: 0.6521 - val_loss: 1.2030 - val_acc: 0.6520
Epoch 6/10
110288/110288 [==============================] - 10s 88us/step - loss: 1.1888 - acc: 0.6567 - val_loss: 1.1746 - val_acc: 0.6575
Epoch 7/10
110288/110288 [==============================] - 10s 88us/step - loss: 1.1622 - acc: 0.6610 - val_loss: 1.1520 - val_acc: 0.6608
Epoch 8/10
110288/110288 [==============================] - 10s 88us/step - loss: 1.1417 - acc: 0.6648 - val_loss: 1.1378 - val_acc: 0.6646
Epoch 9/10
110288/110288 [==============================] - 10s 88us/step - loss: 1.1267 - acc: 0.6676 - val_loss: 1.1153 - val_acc: 0.6705
Epoch 10/10
110288/110288 [==============================] - 10s 88us/step - loss: 1.1123 - acc: 0.6707 - val_loss: 1.1053 - val_acc: 0.6708
new jersey est parfois parfois en mois et il est est en en <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD>
###Markdown
Model 4: Encoder-Decoder (OPTIONAL)
Time to look at encoder-decoder models. This model is made up of an encoder and a decoder. The encoder creates a matrix representation of the sentence; the decoder takes this matrix as input and predicts the translation as output. Create an encoder-decoder model in the cell below.
###Code
def encdec_model(input_shape, output_sequence_length, english_vocab_size, french_vocab_size):
"""
Build and train an encoder-decoder model on x and y
:param input_shape: Tuple of input shape
:param output_sequence_length: Length of output sequence
:param english_vocab_size: Number of unique English words in the dataset
:param french_vocab_size: Number of unique French words in the dataset
:return: Keras model built, but not trained
"""
# OPTIONAL: Implement
return None
tests.test_encdec_model(encdec_model)
# OPTIONAL: Train and Print prediction(s)
###Output
_____no_output_____
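###Markdown
Since the optional cell above is intentionally left unimplemented, here is one minimal sketch of an encoder-decoder (my addition, not the course solution; the name `encdec_model_sketch` and the unit counts are arbitrary choices). It assumes the same Keras imports the other models already use (`Input`, `GRU`, `RepeatVector`, `TimeDistributed`, `Dense`, `Model`, `Adam`, `sparse_categorical_crossentropy`):
```python
def encdec_model_sketch(input_shape, output_sequence_length, english_vocab_size, french_vocab_size):
    # Encoder: compress the whole input sequence into one state vector
    input_seq = Input(shape=input_shape[1:])
    encoder_state = GRU(256)(input_seq)
    # Decoder: repeat that state once per output step and unroll a GRU over it
    decoder_in = RepeatVector(output_sequence_length)(encoder_state)
    decoder_out = GRU(256, return_sequences=True)(decoder_in)
    output_layer = TimeDistributed(Dense(french_vocab_size, activation='softmax'))(decoder_out)
    model = Model(inputs=input_seq, outputs=output_layer)
    model.compile(loss=sparse_categorical_crossentropy, optimizer=Adam(0.01), metrics=['accuracy'])
    return model
```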
###Markdown
Model 5: Custom (IMPLEMENTATION)
Use everything you learned from the previous models to create a model that incorporates embedding and a bidirectional RNN into one model.
###Code
def model_final(input_shape, output_sequence_length, english_vocab_size, french_vocab_size):
"""
Build and train a model that incorporates embedding, encoder-decoder, and bidirectional RNN on x and y
:param input_shape: Tuple of input shape
:param output_sequence_length: Length of output sequence
:param english_vocab_size: Number of unique English words in the dataset
:param french_vocab_size: Number of unique French words in the dataset
:return: Keras model built, but not trained
"""
    # Implemented a final model with 4 hidden layers, one embedding and one output layer
    # Experimented with different numbers of units in the GRU layers in order to increase the accuracy of the model
    # Experimented with the learning rate hyperparameter to improve the accuracy
#Input seq
input_seq = Input(shape=input_shape[1:])
#Embedding layer
embedding_layer = Embedding(english_vocab_size, output_sequence_length)(input_seq)
# 1st Hidden Layer
hidden_layer_1 = Bidirectional(GRU(256))(embedding_layer)
# 2nd Hidden Layer
hidden_layer_2 = Dense(256, activation='relu')(hidden_layer_1)
# 3rd Hidden layer
hidden_layer_3 = RepeatVector(output_sequence_length)(hidden_layer_2)
# 4th Hidden layer
hidden_layer_4 = Bidirectional(GRU(256, return_sequences=True))(hidden_layer_3)
# Output Layer
output_layer = TimeDistributed(Dense(french_vocab_size, activation='softmax'))(hidden_layer_4)
model = Model(inputs=input_seq, outputs=output_layer)
learning_rate = 0.01
#Model Compilation
model.compile(loss=sparse_categorical_crossentropy,
optimizer=Adam(lr=learning_rate),
metrics=['accuracy'])
return model
tests.test_model_final(model_final)
print('Final Model Loaded')
# TODO: Train the final model
###Output
Final Model Loaded
###Markdown
Prediction (IMPLEMENTATION)
###Code
def final_predictions(x, y, x_tk, y_tk):
"""
Gets predictions using the final model
:param x: Preprocessed English data
:param y: Preprocessed French data
:param x_tk: English tokenizer
:param y_tk: French tokenizer
"""
# TODO: Train neural network using model_final
x = pad(x, y.shape[1])
model = model_final(x.shape,
y.shape[1],
len(x_tk.word_index) + 1,
len(y_tk.word_index) + 1)
model.fit(x, y, batch_size=1024, epochs=20, validation_split=0.2)
## DON'T EDIT ANYTHING BELOW THIS LINE
y_id_to_word = {value: key for key, value in y_tk.word_index.items()}
y_id_to_word[0] = '<PAD>'
sentence = 'he saw a old yellow truck'
sentence = [x_tk.word_index[word] for word in sentence.split()]
sentence = pad_sequences([sentence], maxlen=x.shape[-1], padding='post')
sentences = np.array([sentence[0], x[0]])
predictions = model.predict(sentences, len(sentences))
print('Sample 1:')
print(' '.join([y_id_to_word[np.argmax(x)] for x in predictions[0]]))
print('Il a vu un vieux camion jaune')
print('Sample 2:')
print(' '.join([y_id_to_word[np.argmax(x)] for x in predictions[1]]))
print(' '.join([y_id_to_word[np.max(x)] for x in y[0]]))
final_predictions(preproc_english_sentences, preproc_french_sentences, english_tokenizer, french_tokenizer)
###Output
Train on 110288 samples, validate on 27573 samples
Epoch 1/20
110288/110288 [==============================] - 40s 360us/step - loss: 2.3031 - acc: 0.5046 - val_loss: 1.3894 - val_acc: 0.6240
Epoch 2/20
110288/110288 [==============================] - 37s 332us/step - loss: 1.1612 - acc: 0.6733 - val_loss: 1.5751 - val_acc: 0.6036
Epoch 3/20
110288/110288 [==============================] - 37s 332us/step - loss: 0.9920 - acc: 0.7114 - val_loss: 0.8394 - val_acc: 0.7452
Epoch 4/20
110288/110288 [==============================] - 37s 332us/step - loss: 0.7303 - acc: 0.7718 - val_loss: 0.6462 - val_acc: 0.7942
Epoch 5/20
110288/110288 [==============================] - 37s 332us/step - loss: 0.5679 - acc: 0.8191 - val_loss: 0.4876 - val_acc: 0.8440
Epoch 6/20
110288/110288 [==============================] - 37s 332us/step - loss: 0.4158 - acc: 0.8675 - val_loss: 0.3430 - val_acc: 0.8917
Epoch 7/20
110288/110288 [==============================] - 37s 332us/step - loss: 0.3055 - acc: 0.9082 - val_loss: 0.2315 - val_acc: 0.9337
Epoch 8/20
110288/110288 [==============================] - 37s 332us/step - loss: 0.2091 - acc: 0.9393 - val_loss: 0.2202 - val_acc: 0.9358
Epoch 9/20
110288/110288 [==============================] - 37s 332us/step - loss: 0.1831 - acc: 0.9462 - val_loss: 0.1770 - val_acc: 0.9497
Epoch 10/20
110288/110288 [==============================] - 36s 331us/step - loss: 0.1566 - acc: 0.9539 - val_loss: 0.1432 - val_acc: 0.9586
Epoch 11/20
110288/110288 [==============================] - 37s 331us/step - loss: 0.1381 - acc: 0.9594 - val_loss: 0.1386 - val_acc: 0.9587
Epoch 12/20
110288/110288 [==============================] - 37s 332us/step - loss: 0.1252 - acc: 0.9630 - val_loss: 0.1292 - val_acc: 0.9635
Epoch 13/20
110288/110288 [==============================] - 37s 332us/step - loss: 0.1189 - acc: 0.9650 - val_loss: 0.1286 - val_acc: 0.9622
Epoch 14/20
110288/110288 [==============================] - 37s 332us/step - loss: 0.1084 - acc: 0.9680 - val_loss: 0.1325 - val_acc: 0.9606
Epoch 15/20
110288/110288 [==============================] - 37s 333us/step - loss: 0.1223 - acc: 0.9639 - val_loss: 0.1068 - val_acc: 0.9689
Epoch 16/20
110288/110288 [==============================] - 37s 332us/step - loss: 0.0970 - acc: 0.9716 - val_loss: 0.1206 - val_acc: 0.9661
Epoch 17/20
110288/110288 [==============================] - 37s 332us/step - loss: 0.1490 - acc: 0.9566 - val_loss: 0.1447 - val_acc: 0.9590
Epoch 18/20
110288/110288 [==============================] - 37s 332us/step - loss: 0.1033 - acc: 0.9697 - val_loss: 0.1128 - val_acc: 0.9674
Epoch 19/20
110288/110288 [==============================] - 37s 332us/step - loss: 0.0910 - acc: 0.9731 - val_loss: 0.1049 - val_acc: 0.9696
Epoch 20/20
110288/110288 [==============================] - 37s 332us/step - loss: 0.0824 - acc: 0.9759 - val_loss: 0.0918 - val_acc: 0.9744
Sample 1:
il a vu un vieux camion jaune <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD>
Il a vu un vieux camion jaune
Sample 2:
new jersey est parfois calme pendant l' automne et il est neigeux en avril <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD>
new jersey est parfois calme pendant l' automne et il est neigeux en avril <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD>
###Markdown
Submission
When you're ready to submit, complete the following steps:
1. Review the [rubric](https://review.udacity.com/!/rubrics/1004/view) to ensure your submission meets all requirements to pass
2. Generate an HTML version of this notebook
   - Run the next cell to attempt automatic generation (this is the recommended method in Workspaces)
   - Navigate to **FILE -> Download as -> HTML (.html)**
   - Manually generate a copy using `nbconvert` from your shell terminal
```
$ pip install nbconvert
$ python -m nbconvert machine_translation.ipynb
```
3. Submit the project
   - If you are in a Workspace, simply click the "Submit Project" button (bottom towards the right)
   - Otherwise, add the following files into a zip archive and submit them
     - `helper.py`
     - `machine_translation.ipynb`
     - `machine_translation.html`
   - You can export the notebook by navigating to **File -> Download as -> HTML (.html)**.

Generate the html
**Save your notebook before running the next cell to generate the HTML output.** Then submit your project.
###Code
# Save before you run this cell!
!!jupyter nbconvert *.ipynb
###Output
_____no_output_____ |
convolutionalops.ipynb | ###Markdown
Convolutional Operations
Strictly speaking, they are not convolutions but cross-correlations.
###Code
import tensorflow as tf
import numpy as np

sess = tf.Session()
sess.run(tf.global_variables_initializer())

BATCH_SIZE = 1
HEIGHT = 4
WIDTH = 4
CHANNELS = 1

M = np.array([[[[1], [2], [3], [4]],
               [[0], [1], [2], [3]],
               [[0], [0], [1], [2]],
               [[0], [0], [0], [1]]]], dtype=np.float32)
###Output
_____no_output_____
###Markdown
Define some filters we can use for `conv2d` with the above matrix.
###Code
filter_diagonal = np.array([[[[1]], [[0]]],
                            [[[0]], [[1]]]], dtype=np.float32)
filter_vertical = np.array([[[[1]], [[-1]]],
                            [[[1]], [[-1]]]], dtype=np.float32)
###Output
_____no_output_____
###Markdown
Filters in `conv2d`
Now try `conv2d` on `M` with these filters:
###Code
inputs_tf = tf.placeholder(tf.float32, shape=[BATCH_SIZE, HEIGHT, WIDTH, CHANNELS], name='input')
outputs_tf = [tf.nn.conv2d(inputs_tf, filter=f, strides=[1, 2, 1, 1], padding='VALID')
for f in [filter_diagonal, filter_vertical]]
a, b = sess.run(outputs_tf, {inputs_tf: M})
a
b
###Output
_____no_output_____
###Markdown
We see that the first filter (`filter_diagonal`) sums the diagonal of the 2x2 submatrices and the other one rewards values where the left side is positive and the right side is non-positive.
Padding
The `SAME` padding zero-pads the input when the filter window reaches the edge:
###Code
sess.run(tf.nn.conv2d(inputs_tf, filter=filter_diagonal, strides=[1, 2, 1, 1], padding='SAME'),
{inputs_tf: M})
###Output
_____no_output_____
###Markdown
The `VALID` padding computes the valid submatrices only, stopping when we reach the inside edge:
###Code
sess.run(tf.nn.conv2d(inputs_tf, filter=filter_diagonal, strides=[1, 2, 1, 1], padding='VALID'),
{inputs_tf: M})
###Output
_____no_output_____ |
Demo numpi_series.ipynb | ###Markdown
`numpypi_series` is a wrapper around `numpy` that replaces a few intrinsic functions, with the ambition of producing the same numerical results on different platforms under different versions of python. To use, replace
```python
import numpy as np
```
with
```python
import numpypi_series as np
```
You can still access the original `numpy` via `numpy._numpy._numpy`. Here's a check that the "pass through" works:
###Code
# Check numpy functions are visible
print( numpy.arange(3) )
print( numpy._numpy._numpy.arange(3))
###Output
[0 1 2]
[0 1 2]
###Markdown
As far as we can tell, `1/x` does reproduce across platforms but `y/x` does not. Hence, to reproducibly divide two numbers, replace `z=y/x` with `r=1/x; z=y*r`. In case `1/x` does not reproduce either, the reciprocal function, $f(x) = 1/x$, is coded iteratively in `numpypi`:
###Code
# Check reciprocal()
x = [1., 2., 0.5, 3., -3., 1./3, -1./3, 2**63, 2**(-63)]
y = numpy.reciprocal( x )
print('%23s'%'x', '%23s'%'1/x', '%23s'%'x*1/x - 1')
for i in range(len(x)):
print('%23.16e'%x[i], '%23.16e'%y[i], '%23.16e'%(x[i]*y[i]-1.) )
###Output
x 1/x x*1/x - 1
1.0000000000000000e+00 1.0000000000000000e+00 0.0000000000000000e+00
2.0000000000000000e+00 5.0000000000000000e-01 0.0000000000000000e+00
5.0000000000000000e-01 2.0000000000000000e+00 0.0000000000000000e+00
3.0000000000000000e+00 3.3333333333333331e-01 0.0000000000000000e+00
-3.0000000000000000e+00 -3.3333333333333331e-01 0.0000000000000000e+00
3.3333333333333331e-01 3.0000000000000000e+00 0.0000000000000000e+00
-3.3333333333333331e-01 -3.0000000000000000e+00 0.0000000000000000e+00
9.2233720368547758e+18 1.0842021724855044e-19 0.0000000000000000e+00
1.0842021724855044e-19 9.2233720368547758e+18 0.0000000000000000e+00
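###Markdown
The iterative scheme itself isn't shown in this notebook; the following Newton-Raphson sketch is my own illustration (an assumption, not numpypi's actual source), showing how $1/x$ can be computed without a division:
```python
import math

def reciprocal_newton(x, iterations=6):
    # Sketch only: numpypi's real routine also guarantees bit-reproducibility
    # and handles infinities, which this toy version does not.
    if x == 0.0:
        raise ZeroDivisionError('reciprocal of zero')
    sign = -1.0 if x < 0 else 1.0
    m, e = math.frexp(abs(x))            # abs(x) = m * 2**e, with m in [0.5, 1)
    r = 48.0 / 17.0 - (32.0 / 17.0) * m  # linear initial guess on [0.5, 1)
    for _ in range(iterations):
        r = r * (2.0 - m * r)            # Newton step; the error roughly squares each time
    return sign * math.ldexp(r, -e)      # 1/x = (1/m) * 2**(-e)

print(reciprocal_newton(3.0))  # ~0.3333333333333333
```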
###Markdown
The square root function $f(x) = \sqrt{x}$ is also coded iteratively using the Legendre algorithm:
###Code
# Check sqrt()
x = [1., 4., 2., 0.5, 2**63, 2**(-63)]
y = numpy.sqrt( x )
print('%23s'%'x', '%23s'%'sqrt(x)', '%23s'%'(sqrt(x)**2 - x) / x')
for i in range(len(x)):
print('%23.16e'%x[i], '%23.16e'%y[i], '%23.16e'%((y[i]*y[i]-x[i])/x[i]) )
###Output
x sqrt(x) (sqrt(x)**2 - x) / x
1.0000000000000000e+00 1.0000000000000000e+00 0.0000000000000000e+00
4.0000000000000000e+00 2.0000000000000000e+00 0.0000000000000000e+00
2.0000000000000000e+00 1.4142135623730949e+00 -2.2204460492503131e-16
5.0000000000000000e-01 7.0710678118654746e-01 -2.2204460492503131e-16
9.2233720368547758e+18 3.0370004999760494e+09 -2.2204460492503131e-16
1.0842021724855044e-19 3.2927225399135959e-10 -2.2204460492503131e-16
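###Markdown
For comparison, a generic Heron (Newton) iteration for $\sqrt{x}$ is sketched below. This is my own illustration only: it uses ordinary division internally, so it is not the division-free Legendre-style scheme a reproducibility library would actually use.
```python
import math

def sqrt_heron(x, iterations=8):
    # Sketch, assuming x >= 0: reduce x = m * 2**e with an even exponent,
    # iterate on m, then rescale by 2**(e//2).
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)          # x = m * 2**e, with m in [0.5, 1)
    if e % 2:                     # make e even so sqrt(2**e) = 2**(e//2) is exact
        m, e = m * 2.0, e - 1
    r = (1.0 + m) / 2.0           # initial guess
    for _ in range(iterations):
        r = 0.5 * (r + m / r)     # Heron step
    return math.ldexp(r, e // 2)

print(sqrt_heron(2.0))  # ~1.4142135623730951
```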
###Markdown
Trigonometric functions are coded using series. They've been written for accuracy and agree with numpy to within $\epsilon \sim 2.2 \times 10^{-16}$
###Code
eps = numpy.finfo(1.).eps
print( 'eps =', eps )
# Check sin(), cos()
x = numpy.arange(-180-360*0,181+360*0)*(numpy.pi/180.)
s,S = numpy.sin(x), numpy._numpy._numpy.sin(x)
c,C = numpy.cos(x), numpy._numpy._numpy.cos(x)
fig, ax = plt.subplots(1,2,figsize=(10,4))
ax[0].plot(x*180/numpy.pi, s, label='sin(x)');
ax[0].plot(x*180/numpy.pi, c, label='cos(x)');
ax[0].legend(); ax[0].set_xlabel('x [$^\circ$]');
ax[1].plot(x*180/numpy.pi, s-S, label='sin');
ax[1].plot(x*180/numpy.pi, c-C, label='cos');
ax[1].legend(); ax[1].set_xlabel('x [$^\circ$]'); plt.title('Difference with numpy');
###Output
_____no_output_____
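###Markdown
The notebook doesn't show the series itself; a plain Taylor expansion for $\sin(x)$ looks like the sketch below (my illustration; numpypi presumably also reduces the argument into a small interval first, which matters for accuracy at large $|x|$):
```python
def sin_series(x, terms=12):
    # sin(x) = x - x^3/3! + x^5/5! - ..., built with a stable term recurrence
    term = x
    total = x
    for n in range(1, terms):
        term *= -x * x / ((2 * n) * (2 * n + 1))
        total += term
    return total

print(sin_series(3.14159265358979 / 6))  # ~0.5
```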
###Markdown
Mathematically $\sin^2(x) + \cos^2{x} = 1$ but numerically this is approximate for both `numpy` and `numpypi`.
###Code
# Check sin()**2 + cos()**2
x = numpy.arange(-180-360*0,181+360*0)*(numpy.pi/180.)
s,S = numpy.sin(x), numpy._numpy._numpy.sin(x)
c,C = numpy.cos(x), numpy._numpy._numpy.cos(x)
y,Y = s*s + c*c, S*S + C*C
plt.plot(x*180/numpy.pi, y-1, label='numpypi');
plt.plot(x*180/numpy.pi, Y-1, '--', label='numpy');
plt.xlabel('x [$^\circ$]'); plt.title('$sin^2(x)+cos^2(x) - 1$'); plt.legend();
# Special values
x = numpy.arange(-4,5)*0.25*numpy.pi
s = numpy.sin(x)
c = numpy.cos(x)
# s = numpy._numpy.numpy.sin(x)
# c = numpy._numpy.numpy.cos(x)
print('%23s'%'x/pi','%23s'%'sin(x)','%23s'%'cos(x)',)
for i in range(len(x)):
print('%23.16f'%(x[i]/numpy.pi),'%23.16e'%s[i],'%23.16e'%c[i])
###Output
x/pi sin(x) cos(x)
-1.0000000000000000 0.0000000000000000e+00 -1.0000000000000000e+00
-0.7500000000000000 -7.0710678118654757e-01 -7.0710678118654746e-01
-0.5000000000000000 -1.0000000000000000e+00 0.0000000000000000e+00
-0.2500000000000000 -7.0710678118654757e-01 7.0710678118654746e-01
0.0000000000000000 0.0000000000000000e+00 1.0000000000000000e+00
0.2500000000000000 7.0710678118654757e-01 7.0710678118654746e-01
0.5000000000000000 1.0000000000000000e+00 0.0000000000000000e+00
0.7500000000000000 7.0710678118654757e-01 -7.0710678118654746e-01
1.0000000000000000 -0.0000000000000000e+00 -1.0000000000000000e+00
###Markdown
$\sin(-x) = - \sin(x)$ so $\sin(x) + \sin(-x) = 0$:
###Code
# Check symmetry for sin()
x = numpy.linspace(0.,1.,1000)*numpy.pi
sp = numpy.sin(x)
sm = numpy.sin(-x)
y = sp + sm
plt.plot(180/numpy.pi*x, y );
numpy.abs(y).max()
###Output
_____no_output_____
###Markdown
$\cos(-x) = \cos(x)$ so $\cos(x) - \cos(-x) = 0$:
###Code
# Check symmetry for cos()
x = numpy.linspace(0.,1.,1000)*numpy.pi
cp = numpy.cos(x)
cm = numpy.cos(-x)
y = cp - cm
plt.plot(180/numpy.pi*x, y );
numpy.abs(y).max()
x = numpy.linspace(-numpy.pi/2*(1-1*eps), numpy.pi/2*(1-1*eps), 200)
fig, ax = plt.subplots(1,2,figsize=(10,4))
ax[0].plot(x*180/numpy.pi, numpy.tan(x), label='numpypi' );
ax[0].plot(x*180/numpy.pi, numpy._numpy._numpy.tan(x), '--', label='numpy' );
ax[0].set_ylim(-20,20); ax[0].legend(); ax[0].set_xlabel('x [$^\circ$]'), ax[0].set_title('$tan(x)$')
ax[1].semilogy(x*180/numpy.pi, numpy.abs( numpy.tan(x) - numpy._numpy._numpy.tan(x) ) );
ax[0].set_xlabel('x [$^\circ$]'), ax[1].set_title('Magnitude of difference with numpy')
x = numpy.linspace(-numpy.pi*3+1e-2, numpy.pi*3-1e-2, 2000)
plt.plot(x*180/numpy.pi, numpy.tan(x) );
plt.ylim(-20,20)
###Output
_____no_output_____
###Markdown
$\tan(-x) = - \tan(x)$ so $\tan(x) + \tan(-x) = 0$:
###Code
# Check symmetry for tan()
x = numpy.linspace(0.,1.,1000)*0.5*numpy.pi
tp = numpy.tan(x)
tm = numpy.tan(-x)
y = tp + tm
plt.plot(180/numpy.pi*x, y );
numpy.abs(y).max()
# More special values
print('Original numpy results')
print( numpy._numpy._numpy.cos( numpy.pi/4 ) - 0.5*numpy._numpy._numpy.sqrt(2) )
print( numpy._numpy._numpy.sin( numpy.pi/4 ) - 0.5*numpy._numpy._numpy.sqrt(2) )
print( numpy._numpy._numpy.tan( numpy.pi/4 ) - 1.0 )
print('numpypi results')
print( numpy.cos( numpy.pi/4 ) - 0.5*numpy.sqrt(2) )
print( numpy.sin( numpy.pi/4 ) - 0.5*numpy.sqrt(2) )
print( numpy.tan( numpy.pi/4 ) - 1.0 )
# sqrt(x) for various ranges
plt.figure(figsize=(10,8))
x = numpy.linspace(0,1,1000)
plt.subplot(321)
plt.plot(x, numpy._numpy._numpy.sqrt(x), label='numpy');
plt.plot(x, numpy.sqrt(x), label='series');
plt.legend();
plt.subplot(322)
plt.plot(x, ( numpy.sqrt(x) - numpy._numpy._numpy.sqrt(x) ) / numpy.maximum(eps, numpy.sqrt(x)));
x = numpy.linspace(0,1e300,10000)
plt.subplot(323)
plt.plot(x, numpy._numpy._numpy.sqrt(x), label='numpy');
plt.plot(x, numpy.sqrt(x), label='series');
plt.legend();
plt.subplot(324)
plt.plot(x, ( numpy.sqrt(x) - numpy._numpy._numpy.sqrt(x) ) / numpy.maximum(eps, numpy.sqrt(x)));
x = numpy.linspace(0,1000*eps,1000)
plt.subplot(325)
plt.plot(x, numpy._numpy._numpy.sqrt(x), label='numpy');
plt.plot(x, numpy.sqrt(x), label='series');
plt.legend();
plt.subplot(326)
plt.plot(x, ( numpy.sqrt(x) - numpy._numpy._numpy.sqrt(x) ) / numpy.maximum(eps, numpy.sqrt(x)));
print( 'sqrt(0) =', numpy.sqrt(0.) )
# arcsin()
plt.figure(figsize=(10,4))
x = numpy.linspace(-1,1,1000)
plt.subplot(121)
plt.plot(x, numpy._numpy._numpy.arcsin(x)*180/numpy.pi, label='numpy');
plt.plot(x, numpy.arcsin(x)*180/numpy.pi, label='series');
plt.legend();
plt.subplot(122)
plt.plot(x, ( numpy.arcsin(x) - numpy._numpy._numpy.arcsin(x) ) / numpy.maximum(eps, numpy.abs(numpy.arcsin(x)) ) );
# arctan()
plt.figure(figsize=(10,4))
x = numpy.linspace(-10,10,1000)
plt.subplot(121)
plt.plot(x, numpy._numpy._numpy.arctan(x)*180/numpy.pi, label='numpy');
plt.plot(x, numpy.arctan(x)*180/numpy.pi, label='series');
plt.legend();
plt.subplot(122)
plt.plot(x, ( numpy.arctan(x) - numpy._numpy._numpy.arctan(x) ) / numpy.maximum(eps, numpy.abs(numpy.arctan(x)) ) );
# arctan2()
plt.figure(figsize=(10,4))
a = numpy.linspace(-numpy.pi+0*eps,numpy.pi-0*eps,10001)
x,y = numpy.cos(a), numpy.sin(a)
plt.subplot(121)
plt.plot(a*180/numpy.pi, numpy._numpy._numpy.arctan2(y,x), label='numpy');
plt.plot(a*180/numpy.pi, numpy.arctan2(y,x), '--', label='series');
plt.legend();
plt.subplot(122)
plt.plot(a*180/numpy.pi, ( numpy.arctan2(y,x) - numpy._numpy._numpy.arctan2(y,x) ) / numpy.maximum(eps, numpy.abs(numpy.arctan2(y,x)) ) );
x,y = x[[0,-1]],y[[0,-1]]
x, y, numpy._numpy._numpy.arctan2(y,x), numpy.arctan2(y,x)
print( numpy.arctan2(0.*x,1.+0*x))
print(x[-1],y[-1], y[-1]==0, y[-1]<0)
# arccos()
plt.figure(figsize=(10,4))
x = numpy.linspace(-1,1,1000)
plt.subplot(121)
plt.plot(x, numpy._numpy._numpy.arccos(x)*180/numpy.pi, label='numpy');
plt.plot(x, numpy.arccos(x)*180/numpy.pi, label='series');
plt.legend();
plt.subplot(122)
plt.plot(x, ( numpy.arccos(x) - numpy._numpy._numpy.arccos(x) ) / numpy.maximum(eps, numpy.abs(numpy.arccos(x)) ) );
###Output
_____no_output_____ |
notebooks/run_best_model_dgl-ke.ipynb | ###Markdown
Run best model
Take the model config from the best model in `dglke_results` and train a model with the same parameters on the full dataset.
###Code
import numpy as np
import itertools
import datetime
import json
import os
###Output
_____no_output_____
###Markdown
1. Get parameters
###Code
# SMG
best_model_config = "RotatE_heritageconnector_10"
# V&A
best_model_config = "RotatE_heritageconnector_2"
with open(f"./dglke_results/{best_model_config}/config.json") as f:
p = json.load(f)
p
###Output
_____no_output_____
###Markdown
2. Run DGL-KE with the chosen parameter set
###Code
# fixed params
DATA_PATH = "../data/interim/"
TRAIN_FILENAME = "triples_filtered_by_predicate.csv"
SAVE_AND_LOGS_PATH="./dglke_best_model"
DATASET="heritageconnector"
FORMAT="raw_udd_hrt"
LOG_INTERVAL=10000
BATCH_SIZE_EVAL=16
NEG_SAMPLE_SIZE_EVAL=1000
N_EPOCHS=1000
# SMG
# N_TRIPLES=2793238 # 19.07
# V&A
N_TRIPLES=5095636
# delete old results and logs folders
! rm -rf {SAVE_AND_LOGS_PATH}
# run experiment
!mkdir dglke_best_model
"""
Explanation for (some) parameters:
- max_step: we convert from n_epochs to n_steps by doing n_epochs*(n_triples/batch_size)
- de: double entity dimension, as RotatE entities have a complex representation
"""
print(f"---TRAINING {best_model_config}---")
filename = f"{SAVE_AND_LOGS_PATH}/logs.txt"
neg_adv_flag = '-adv' if p['neg_adversarial_sampling'] else ''
!DGLBACKEND=pytorch dglke_train --model_name {p['model']} -de --data_path {DATA_PATH} --save_path {SAVE_AND_LOGS_PATH} --dataset {DATASET} --format {FORMAT} \
--data_files {TRAIN_FILENAME} --delimiter ' ' --max_step {int(N_TRIPLES/p['batch_size']*N_EPOCHS)} \
--log_interval {LOG_INTERVAL} --batch_size {p['batch_size']} --neg_sample_size {p['neg_sample_size']} \
--lr {p['lr']} {neg_adv_flag} --hidden_dim {p['emb_size']} -rc {p['regularization_coef']} -g {p['gamma']} \
--gpu 0 --mix_cpu_gpu --async_update |& tee {filename}
###Output
---TRAINING RotatE_heritageconnector_10---
!DGLBACKEND=pytorch dglke_train --model_name RotatE --data_path ../data/interim/ --save_path ./dglke_best_model --dataset heritageconnector --format raw_udd_hrt --data_files triples_filtered_by_predicate.csv --delimiter ' ' --max_step 349154 --log_interval 10000 --batch_size 8000 --neg_sample_size 10 --lr 0.01 -adv --hidden_dim 400 -rc 2e-06 -g 5.0 --gpu 0 --mix_cpu_gpu --async_update |& tee ./dglke_best_model/logs.txt
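###Markdown
A quick sanity check of the epoch-to-step conversion described in the cell above (my addition); the result matches the `--max_step 349154` visible in the echoed command, which corresponds to the commented-out SMG triple count:
```python
n_triples, batch_size, n_epochs = 2793238, 8000, 1000
print(int(n_triples / batch_size * n_epochs))  # 349154
```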
|
CESM2_COSP/correlation_test.ipynb | ###Markdown
Testing correlation function
###Code
import sys
# Add common resources folder to path
sys.path.append('/glade/u/home/jonahshaw/Scripts/git_repos/CESM2_analysis')
sys.path.append('/glade/u/home/jonahshaw/Scripts/git_repos/CESM2_analysis/Common/')
# sys.path.append("/home/jonahks/git_repos/netcdf_analysis/Common/")
from imports import (
pd, np, xr, mpl, plt, sns, os,
datetime, sys, crt, gridspec,
ccrs, metrics, Iterable, cmaps,
mpl,glob
)
from functions import (
masked_average, add_weights, sp_map,
season_mean, get_dpm, leap_year, share_ylims,
to_png
)
import cftime
from cloud_metric import Cloud_Metric
from collections import deque
%matplotlib inline
###Output
The autoreload extension is already loaded. To reload it, use:
%reload_ext autoreload
###Markdown
Taylor plot specific imports
###Code
sys.path.append('/glade/u/home/jonahshaw/Scripts/git_repos/CESM2_analysis/CESM2_COSP/taylor_plots/')
import taylor_jshaw as taylor
import matplotlib as matplotlib
import matplotlib.patches as patches
from interp_functions import *
from functions import calculate
###Output
_____no_output_____
###Markdown
I am reorganizing to have everything in this first directory
###Code
# where to save processed files
save_dir = '/glade/u/home/jonahshaw/w/archive/taylor_files/'
og_dir = '/glade/u/home/jonahshaw/w/kay2012_OGfiles'
###Output
_____no_output_____
###Markdown
Different files for use
Obs
###Code
misr_og = xr.open_dataset('%s/2012_obs/MISR.CLDTOT_MISR.nc' % save_dir)['CLDTOT_MISR']
misr_new = xr.open_dataset('%s/2021_obs/MISR_CLDTOT_200003_202005.nc' % save_dir)['cltMISR']
# Not masked
misr_new_avg = misr_new.groupby('time.month').mean('time').mean('month')
# Fix longitude
misr_new_flip = misr_new_avg.assign_coords(lon=(misr_new_avg.lon % 360)).sortby('lon')
misr_new_m = misr_new_flip.where(misr_new_flip!=0).interp_like(misr_og)
misr_cam5 = xr.open_dataset('%s/CAM5.CLDTOT_MISR.nc' % og_dir)['CLDTOT_MISR']
misr_cam5_intrp = misr_cam5.interp_like(misr_og)
fig,axs = plt.subplots(1,3,figsize=(14,5))
misr_og.plot(ax=axs[0])
misr_new_m.plot(ax=axs[1])
misr_cam5_intrp.plot(ax=axs[2])
###Output
_____no_output_____
###Markdown
The correlation calculation is clearly not working.
###Code
calculate(misr_cam5_intrp,misr_og)
calculate(misr_new_m,misr_og)
_misr_og = add_weights(misr_og)
# NOTE: `cntl` and `test` are not defined in this notebook; this scratch line just
# sketches the joint-NaN mask ("mask means hide") used when comparing two fields.
mask = np.bitwise_or(xr.ufuncs.isnan(cntl),xr.ufuncs.isnan(test)) # mask means hide
misr_new60 = misr_new_m.where(np.abs(misr_new_m.lat)<60)
misr_og60 = misr_og.where(np.abs(misr_og.lat)<60)
###Output
_____no_output_____
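###Markdown
`calculate` is imported from the project's own `functions` module, so its internals aren't shown here. For reference, a generic latitude-weighted pattern correlation with joint-NaN masking (my assumption of roughly what such a function should do, written for plain 2-D numpy arrays) could look like:
```python
import numpy as np

def weighted_pattern_corr(a, b, lat):
    # a, b: 2-D (lat, lon) fields; lat: 1-D latitudes in degrees.
    w = np.cos(np.deg2rad(lat))[:, None] * np.ones(a.shape)  # area weights
    valid = ~(np.isnan(a) | np.isnan(b))                     # mask means hide
    w = np.where(valid, w, 0.0)
    a = np.where(valid, a, 0.0)
    b = np.where(valid, b, 0.0)
    a_mean = (w * a).sum() / w.sum()
    b_mean = (w * b).sum() / w.sum()
    cov = (w * (a - a_mean) * (b - b_mean)).sum()
    var_a = (w * (a - a_mean) ** 2).sum()
    var_b = (w * (b - b_mean) ** 2).sum()
    return cov / np.sqrt(var_a * var_b)
```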
###Markdown
Still bad when I remove high latitudes.
###Code
calculate(misr_new60,misr_og60)
calculate(misr_new60,misr_cam5_intrp)
calculate(misr_cam5_intrp,misr_og60)
###Output
_____no_output_____
###Markdown
Now the correlation looks right... These values should not be so different. 7% is a lot.
###Code
(misr_new60-misr_og60).mean()
###Output
_____no_output_____
###Markdown
They should be much more highly correlated
###Code
calculate(misr_new60,misr_og60)
calculate(misr_new_m,misr_og)
misr_new60.plot()
misr_og60.plot()
###Output
_____no_output_____
###Markdown
Models
###Code
misr_cam4 = xr.open_dataset('%s/cam4_1deg_release_amip/cam4_1deg_release_amip.CLDTOT_MISR.nc' % save_dir)
misr_cam5 = xr.open_dataset('%s/cam5_1deg_release_amip/cam5_1deg_release_amip.CLDTOT_MISR.nc' % save_dir)
misr_cam6 = xr.open_dataset('%s/f.e21.FHIST_BGC.f09_f09_mg17.CMIP6-AMIP.001_cosp1/f.e21.FHIST_BGC.f09_f09_mg17.CMIP6-AMIP.001_cosp1.CLDTOT_MISR.nc' % save_dir)
ls $save_dir/f.e21.FHIST_BGC.f09_f09_mg17.CMIP6-AMIP.001_cosp1
###Output
f.e21.FHIST_BGC.f09_f09_mg17.CMIP6-AMIP.001_cosp1.cam.h0.CLDHGH_CAL.197901-201412.nc
f.e21.FHIST_BGC.f09_f09_mg17.CMIP6-AMIP.001_cosp1.cam.h0.CLDLOW_CAL.197901-201412.nc
f.e21.FHIST_BGC.f09_f09_mg17.CMIP6-AMIP.001_cosp1.cam.h0.CLDMED_CAL.197901-201412.nc
f.e21.FHIST_BGC.f09_f09_mg17.CMIP6-AMIP.001_cosp1.cam.h0.CLDTOT_CAL.197901-201412.nc
f.e21.FHIST_BGC.f09_f09_mg17.CMIP6-AMIP.001_cosp1.cam.h0.CLDTOT_ISCCP.197901-201412.nc
f.e21.FHIST_BGC.f09_f09_mg17.CMIP6-AMIP.001_cosp1.cam.h0.LANDFRAC.197901-201412.nc
f.e21.FHIST_BGC.f09_f09_mg17.CMIP6-AMIP.001_cosp1.cam.h0.LWCF.197901-201412.nc
f.e21.FHIST_BGC.f09_f09_mg17.CMIP6-AMIP.001_cosp1.cam.h0.SWCF.197901-201412.nc
f.e21.FHIST_BGC.f09_f09_mg17.CMIP6-AMIP.001_cosp1.CLDTHCK_MISR.nc
f.e21.FHIST_BGC.f09_f09_mg17.CMIP6-AMIP.001_cosp1.CLDTHCK_MODIS.nc
f.e21.FHIST_BGC.f09_f09_mg17.CMIP6-AMIP.001_cosp1.CLDTOT_MISR.nc
[0m[01;34mold[0m/
|
Python/jwt_creation_and_validation.ipynb | ###Markdown
Understanding JWT: token creation and validation
When developing an API service, you can use JSON Web Tokens (JWT) to control access to it. This document explains the JWT token creation and validation process using diagrams and Python code. HS256 is used as the digital signature algorithm.
###Code
import base64
import hashlib
import hmac
def base64_encode(input_as_bytes):
b = base64.urlsafe_b64encode(input_as_bytes).decode('utf-8')
return b.rstrip('=')
def base64_decode(input_as_string):
padding = 4 - len(input_as_string) % 4
input_as_string = input_as_string + '=' * padding
return base64.urlsafe_b64decode(input_as_string.encode('utf-8')).decode('utf-8')
###Output
_____no_output_____
###Markdown
Token creation
Input:
* header object
```
{
  "typ": "JWT",
  "alg": "HS256"
}
```
* payload object
```
{
  "iss": "fun-with-jwts",
  "sub": "AzureDiamond",
  "jti": "f6c1097f-cc48-4949-a627-8b94fc5e37ba",
  "iat": 1596185001,
  "exp": 1596185061
}
```
* secret
```
my-secret
```
Output:
* token
```
eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJmdW4td2l0aC1qd3RzIiwic3ViIjoiQXp1cmVEaWFtb25kIiwianRpIjoiZjZjMTA5N2YtY2M0OC00OTQ5LWE2MjctOGI5NGZjNWUzN2JhIiwiaWF0IjoxNTk2MTg1MDAxLCJleHAiOjE1OTYxODUwNjF9.UXvXY97CNcHv7LobrBagePBPeGiW2F-Z-nuINSmUy5k
```
###Code
def create_jwt_token(header_obj_str, payload_obj_str, secret):
header = base64_encode(header_obj_str.encode('utf-8'))
payload = base64_encode(payload_obj_str.encode('utf-8'))
header_plus_payload = f'{header}.{payload}'
m = hmac.new(secret.encode('utf-8'), digestmod=hashlib.sha256)
m.update(header_plus_payload.encode('utf-8'))
d = m.digest()
signature = base64_encode(d)
jwt_token = f'{header_plus_payload}.{signature}'
return jwt_token
header_obj_str = '{\
"typ":"JWT",\
"alg":"HS256"\
}'
payload_obj_str = '{\
"iss":"fun-with-jwts",\
"sub":"AzureDiamond",\
"jti":"f6c1097f-cc48-4949-a627-8b94fc5e37ba",\
"iat":1596185001,\
"exp":1596185061\
}'
secret = 'my-secret'
jwt_token = create_jwt_token(header_obj_str, payload_obj_str, secret)
print('** JWT token **')
print(jwt_token)
###Output
** JWT token **
eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJmdW4td2l0aC1qd3RzIiwic3ViIjoiQXp1cmVEaWFtb25kIiwianRpIjoiZjZjMTA5N2YtY2M0OC00OTQ5LWE2MjctOGI5NGZjNWUzN2JhIiwiaWF0IjoxNTk2MTg1MDAxLCJleHAiOjE1OTYxODUwNjF9.UXvXY97CNcHv7LobrBagePBPeGiW2F-Z-nuINSmUy5k
###Markdown
Token validation
Input:
* token
```
eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJmdW4td2l0aC1qd3RzIiwic3ViIjoiQXp1cmVEaWFtb25kIiwianRpIjoiZjZjMTA5N2YtY2M0OC00OTQ5LWE2MjctOGI5NGZjNWUzN2JhIiwiaWF0IjoxNTk2MTg1MDAxLCJleHAiOjE1OTYxODUwNjF9.UXvXY97CNcHv7LobrBagePBPeGiW2F-Z-nuINSmUy5k
```
* secret
```
my-secret
```
Output:
* is_valid
```
True
```
###Code
def validate_jwt_token(token, secret):
pos = token.rfind('.')
header_plus_payload = token[:pos]
signature = token[pos+1:]
m = hmac.new(secret.encode('utf-8'), digestmod=hashlib.sha256)
m.update(header_plus_payload.encode('utf-8'))
d = m.digest()
signature_derived = base64_encode(d)
return signature_derived == signature
token = 'eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJmdW4td2l0aC1qd3RzIiwic3ViIjoiQXp1cmVEaWFtb25kIiwianRpIjoiZjZjMTA5N2YtY2M0OC00OTQ5LWE2MjctOGI5NGZjNWUzN2JhIiwiaWF0IjoxNTk2MTg1MDAxLCJleHAiOjE1OTYxODUwNjF9.UXvXY97CNcHv7LobrBagePBPeGiW2F-Z-nuINSmUy5k'
secret = 'my-secret'
is_valid = validate_jwt_token(token, secret)
print(f'** is_valid: {is_valid} **')
###Output
** is_valid: True **
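###Markdown
As a quick negative check (my addition, not part of the original document): corrupting any character of the token must make validation fail, because the HMAC recomputed over `header.payload` no longer matches the attached signature.
```python
tampered = token[:-1] + ('A' if token[-1] != 'A' else 'B')
print(validate_jwt_token(tampered, secret))  # expected: False
```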
|
Week2/2_sets_hw.ipynb | ###Markdown
**IMPORTANT:** When submitting this homework notebook, please modify only the cells that start with:
```python
# modify this cell
```
Import Stuff
Notice that we do not import *numpy* or *scipy*; neither of these packages is needed for this homework. For our solutions, the only command we needed to import was `itertools.product()`.
###Code
import itertools
from itertools import product
###Output
_____no_output_____
###Markdown
Sets
Read the notebook on sets before attempting these exercises.

Problem 1
De Morgan's first law states the following for any two sets $A$ and $B$:
$$(A\cup B)^c = A^c\cap B^c$$
In the following two exercises we calculate $(A\cup B)^c$ in two different ways. Both functions must take $A$, $B$ and the universal set $U$ as their inputs.

Exercise 1.1
Write the function **complement_of_union** that first determines $A\cup B$ and then evaluates the complement of this set. Output the tuple: $\begin{pmatrix}A\cup B,\, (A\cup B)^c\end{pmatrix}$.

**Code**
```python
A = {1, 2, 3}
B = {3, -6, 2, 0}
U = {-10, -9, -8, -7, -6, 0, 1, 2, 3, 4}
complement_of_union(A, B, U)
```
**Output**
```
({-6, 0, 1, 2, 3}, {-10, -9, -8, -7, 4})
```
###Code
# modify this cell
def complement_of_union(A, B, U):
# inputs: A, B and U are of type 'set'
# output: a tuple of the type (set, set)
AuB = A.union(B)
return ( AuB, U.difference(AuB) )
# Check Function
A = {1, 2, 3, 4, 5}
B = {0, 2, -6, 5, 8, 9}
U = A|B|{-3, 7, 10, -4}
assert( complement_of_union(A, B, U) == ({-6, 0, 1, 2, 3, 4, 5, 8, 9}, {-4, -3, 7, 10}) )
#
# AUTOGRADER TEST - DO NOT REMOVE
#
A = { -6, 3, 4, 5}
B = { -6, 5, 13 }
U = A | B | { 12, -2, -4}
complement_of_union(A,B,U)
###Output
_____no_output_____
###Markdown
Exercise 1.2
Write the function **intersection_of_complements** that first determines $A^c$ and $B^c$ and then evaluates the intersection of their complements. Output the tuple: $\begin{pmatrix}A^c, \, A^c\cap B^c\end{pmatrix}$.

**Code**
```python
A = {1, 2, 3}
B = {3, -6, 2, 0}
U = {-10, -9, -8, -7, -6, 0, 1, 2, 3, 4}
intersection_of_complements(A, B, U)
```
**Output**
```
({-10, -9, -8, -7, -6, 0, 4}, {-10, -9, -8, -7, 4})
```
###Code
# modify this cell
def intersection_of_complements(A, B, U):
# inputs: A, B and U are of type 'set'
# output: a tuple of the form (set, set)
return ( U.difference(A), U.difference(A).intersection(U.difference(B)))
# Check Function
A = {1, 2, 3, 4, 5}
B = {0, 2, -6, 5, 8, 9}
U = A|B|{-3, 7, 10, -4}
assert( intersection_of_complements(A, B, U) == ({-6, -4, -3, 0, 7, 8, 9, 10}, {-4, -3, 7, 10}) )
#
# AUTOGRADER TEST - DO NOT REMOVE
#
intersection_of_complements(A, B, U) == ({-6, -4, -3, 0, 7, 8, 9, 10}, {-4, -3, 7, 10})
A = { -6, 3, 4, 5}
B = { -6, 5, 13 }
U = A | B | { 12,-2,-4}
intersection_of_complements(A,B,U)
###Output
_____no_output_____
###Markdown
Problem 2
This problem illustrates a property of cartesian products of unions of two or more sets. For four sets $A$, $B$, $S$ and $T$, the following holds:
$$(A\cup B)\times(S\cup T) = (A\times S)\cup(A\times T)\cup(B\times S)\cup(B\times T)$$
Write the following functions to determine $(A\cup B)\times(S\cup T)$ in two different ways.

Exercise 2.1
Write the function **product_of_unions** that first determines $(A\cup B)$ and $(S\cup T)$ and then evaluates the cartesian product of these unions. Output the tuple $\begin{pmatrix}(A\cup B),\, (A\cup B)\times(S\cup T)\end{pmatrix}$.

**Code**
```python
A = {1, 2}
B = {1, 3}
S = {-1, 0}
T = {0, 10}
product_of_unions(A, B, S, T)
```
**Output**
```
({1, 2, 3}, {(1, -1), (1, 0), (1, 10), (2, -1), (2, 0), (2, 10), (3, -1), (3, 0), (3, 10)})
```
###Code
# modify this cell
def cartesian_product(A,B):
C = set()
for x in A:
for y in B:
C.add((x,y))
return C
def product_of_unions(A, B, S, T):
# inputs: A, B, S and T are sets
# output: a tuple of the type (set, set)
AuB = A.union(B)
SuT = S.union(T)
return ( A.union(B), cartesian_product(AuB,SuT) )
# modify this cell
# Check Function
A = {5}
B = {5, 6}
S = {-1, 0, 1}
T = {1, 2}
assert( product_of_unions(A, B, S, T) == \
({5, 6}, {(5, -1), (5, 0), (5, 1), (5, 2), (6, -1), (6, 0), (6, 1), (6, 2)}) )
#
# AUTOGRADER TEST - DO NOT REMOVE
#
product_of_unions( set({5}), set({5}), set({-1,0}), set({0}))
###Output
_____no_output_____
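###Markdown
Since the notebook already imports `itertools.product`, the `cartesian_product` helper above can equivalently be written as a one-liner; both forms produce the same set of pairs:
```python
def cartesian_product(A, B):
    return set(product(A, B))
```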
###Markdown
Exercise 2.2
Write a function **union_of_products** that first determines $(A\times S)$ and the other three cartesian products that appear on the right hand side of the identity above, then evaluates the union of these cartesian products. Output the tuple $\begin{pmatrix}(A\times S),\, (A\times S)\cup(A\times T)\cup(B\times S)\cup(B\times T)\end{pmatrix}$.

**Code**
```python
A = {1, 2}
B = {1, 3}
S = {-1, 0}
T = {0, 10}
union_of_products(A, B, S, T)
```
**Output**
```
({(1, -1), (1, 0), (2, -1), (2, 0)}, {(1, -1), (1, 0), (1, 10), (2, -1), (2, 0), (2, 10), (3, -1), (3, 0), (3, 10)})
```
###Code
# modify this cell
def union_of_products(A, B, S, T):
# inputs: A, B, S and T are sets
# output: a tuple of the type (set, set)
AxS = cartesian_product(A,S)
AxT = cartesian_product(A,T)
BxS = cartesian_product(B,S)
BxT = cartesian_product(B,T)
return ( AxS, AxS.union(AxT).union(BxS).union(BxT))
# Check Function
A = {5}
B = {5, 6}
S = {-1, 0, 1}
T = {1, 2}
assert( union_of_products(A, B, S, T) == \
({(5, -1), (5, 0), (5, 1)}, \
{(5, -1), (5, 0), (5, 1), (5, 2), (6, -1), (6, 0), (6, 1), (6, 2)}) \
)
#
# AUTOGRADER TEST - DO NOT REMOVE
#
union_of_products( {5}, {5}, {-1,0}, {0})
###Output
_____no_output_____ |
advent_of_code_2018/Day13 maze.ipynb | ###Markdown
Learnings:
- Modifying a list while looping over it can be done by iterating over a slice (`for cart in carts[:]`), since removals then happen on the original list in place.
- Sorting a list by a key: `carts.sort(key=lambda x: x[0][1][1])`.
- Using a `Counter` to detect a collision: `Counter([tuple(x[0]) for x in carts]).most_common(1)[0]` gives the most common cart location and how often it appears; appearing more than once means a crash (see `check_collision` in the code below).
- Enumerating numpy indices and values: `np.ndenumerate(maze)`.
- Processing all lines from a file: `lines = [line.rstrip('\n') for line in f]` (iterate the file handle rather than calling `f.read()`, which returns the whole file as one string).
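A tiny demo of the `Counter` trick (my addition):
```python
from collections import Counter

locations = [(1, 2), (3, 4), (1, 2)]
pos, count = Counter(locations).most_common(1)[0]
print(pos, count)  # (1, 2) 2 -> more than one cart at (1, 2): a crash
```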
###Code
from collections import Counter
import sys
f=open('`day13inputmaze.txt') #not with read because thats probably the whole file
lines = [line.rstrip('\n') for line in f]
np.unique(maze)
maze = np.chararray(unicode=True,shape=(len(lines),len(lines[0])))
for x,line in enumerate(lines):
#print ((line))
maze[x]=list(line)
translate = {
'>':'-',
'<':'-',
'^':'|',
'v':'|'
}
next_intersection = {
'left':'straight',
'straight':'right',
'right':'left'
}
intersection_right = {
'>':'v',
'<':'^',
'v':'<',
'^':'>'
}
intersection_left = {
'>':'^',
'<':'v',
'v':'>',
'^':'<'
}
#\\
turn_one = {
'>':'v',
'<':'^',
'v':'>',
'^':'<'
}
#/
turn_two = {
'>':'^',
'<':'v',
'v':'<',
'^':'>'
}
next_location = {
'>':(0,1),
'<':(0,-1),
'v':(1,0),
'^':(-1,0)
}
def move_cart(cart):
currentlocation = cart[0][1]
#print (cart,'\n',maze[currentlocation])
if maze[currentlocation]==' ':
sys.exit()
if maze[currentlocation]=='+':
#change direction
if cart[2][1]=='left':
#print (intersection_left[cart[1][1]])
cart[1]=('direction',intersection_left[cart[1][1]])
if cart[2][1]=='right':
cart[1]=('direction',intersection_right[cart[1][1]])
#change nextturn
cart[2]=('nextturn',next_intersection[cart[2][1]])
if maze[currentlocation]=='\\':
#change direction
#print ('\\',turn_one[cart[1][1]])
cart[1]=('direction',turn_one[cart[1][1]])
if maze[currentlocation]=='/':
#change direction
#print ('/',turn_two[cart[1][1]])
cart[1]=('direction',turn_two[cart[1][1]])
#print (cart[1])
#print (currentlocation)
newlocation = (currentlocation[0]+next_location[cart[1][1]][0],currentlocation[1]+next_location[cart[1][1]][1])
#print (newlocation)
if newlocation[1]<0:
sys.exit()
#print (newlocation)
cart[0]=('location',newlocation)
#print (cart,'\n')
return cart
def check_collision(carts):
common_list, appearances = Counter([tuple(x[0]) for x in carts]).most_common(1)[0] # Note 1
if appearances > 1:
print(common_list, appearances, 'appeared twice') # ([397, 994, 135, 941], 4)
return (True,common_list)
else:
#print('No list appears more than 3 times!')
return (False,0)
def tick(carts):
carts.sort(key=lambda x:x[0][1][1])
carts.sort(key=lambda x:x[0][1][0])
#print (len(carts))
#print ([c for c in carts])
for i,cart in enumerate(carts[:]):
#print('car'+str(i))
cart = move_cart(cart)
crash = check_collision(carts)
if crash[0]:
carts= [c for c in carts if c[0] != crash[1]]
print (len(carts))
#print ('tickdone')
return carts
carts = []
for index,value in np.ndenumerate(maze):
if value in translate:
#print(index,value)
carts.append([('location',index),('direction',value),('nextturn','left')])
maze[index]=translate[value]
#print (translate[value])
print (len(carts))
for x in range(10000000):
if x%10000==0: print (x)
carts = tick(carts).copy()
if len(carts)==1:
print ('1 left',carts)
sys.exit()
carts= [[('location', (0, 1)), ('direction', '>'), ('nextturn', 'left')],[('location', (2, 3)), ('direction', '<'), ('nextturn', 'left')],[('location', (2, 3)), ('direction', '<'), ('nextturn', 'left')]]
carts
for c in carts:
print (c[0][1])
if c[0]==(2, 3):
print ('vo')
carts.remove(c)
carts
carts = [c for c in carts if c[0]!=(2, 3)]
carts
###Output
_____no_output_____ |
2_aging_signature/archive_initial_submission/.ipynb_checkpoints/DE_tissue_droplet-checkpoint.ipynb | ###Markdown
Load data
###Code
# Data path
data_path = '/data3/martin/tms_gene_data'
output_folder = data_path + '/DE_result'
# Load the data
adata_combine = util.load_normalized_data(data_path)
temp_facs = adata_combine[adata_combine.obs['b_method']=='facs',]
temp_droplet = adata_combine[adata_combine.obs['b_method']=='droplet',]
###Output
_____no_output_____
###Markdown
Generate a list of tissues for DE testing
###Code
tissue_list = list(set(temp_droplet.obs['tissue']))
min_cell_number = 1
analysis_list = []
analysis_info = {}
# for cell_type in cell_type_list:
for tissue in tissue_list:
analyte = tissue
ind_select = (temp_droplet.obs['tissue'] == tissue)
n_young = (temp_droplet.obs['age'][ind_select].isin(['1m', '3m'])).sum()
n_old = (temp_droplet.obs['age'][ind_select].isin(['18m', '21m',
'24m', '30m'])).sum()
analysis_info[analyte] = {}
analysis_info[analyte]['n_young'] = n_young
analysis_info[analyte]['n_old'] = n_old
if (n_young>min_cell_number) & (n_old>min_cell_number):
print('%s, n_young=%d, n_old=%d'%(analyte, n_young, n_old))
analysis_list.append(analyte)
###Output
Tongue, n_young=12044, n_old=8613
Heart_and_Aorta, n_young=1362, n_old=6554
Lung, n_young=6541, n_old=21216
Spleen, n_young=7844, n_old=21478
Liver, n_young=3234, n_old=3246
Bladder, n_young=3450, n_old=5367
Limb_Muscle, n_young=8210, n_old=16759
Thymus, n_young=1145, n_old=6425
Kidney, n_young=4317, n_old=14784
Marrow, n_young=6842, n_old=35099
Mammary_Gland, n_young=4343, n_old=7049
###Markdown
DE using R package MAST
###Code
## DE testing
gene_name_list = np.array(temp_droplet.var_names)
DE_result_MAST = {}
for i_analyte,analyte in enumerate(analysis_list):
print(analyte, '%d/%d'%(i_analyte, len(analysis_list)))
tissue = analyte
ind_select = (temp_droplet.obs['tissue'] == tissue)
adata_temp = temp_droplet[ind_select,]
# reformatting
adata_temp.X = np.array(adata_temp.X.todense())
adata_temp.obs['condition'] = [int(x[:-1]) for x in adata_temp.obs['age']]
adata_temp.obs = adata_temp.obs[['condition', 'sex']]
if len(set(adata_temp.obs['sex'])) <2:
covariate = ''
else:
covariate = '+sex'
# # toy example
# covariate = ''
# np.random.seed(0)
# ind_select = np.random.permutation(adata_temp.shape[0])[0:100]
# ind_select = np.sort(ind_select)
# adata_temp = adata_temp[ind_select, 0:3]
# adata_temp.X[:,0] = (adata_temp.obs['sex'] == 'male')*3
# adata_temp.X[:,1] = (adata_temp.obs['condition'])*3
# DE using MAST
R_cmd = util.call_MAST_age()
get_ipython().run_cell_magic(u'R', u'-i adata_temp -i covariate -o de_res', R_cmd)
de_res.columns = ['gene', 'raw-p', 'coef', 'bh-p']
de_res.index = de_res['gene']
DE_result_MAST[analyte] = pd.DataFrame(index = gene_name_list)
DE_result_MAST[analyte] = DE_result_MAST[analyte].join(de_res)
# fc between yound and old
X = adata_temp.X
y = (adata_temp.obs['condition']>10)
DE_result_MAST[analyte]['fc'] = X[y,:].mean(axis=0) - X[~y,:].mean(axis=0)
# break
###Output
Tongue 0/11
Heart_and_Aorta 1/11
Lung 2/11
Spleen 3/11
Liver 4/11
Bladder 5/11
Limb_Muscle 6/11
Thymus 7/11
Kidney 8/11
Marrow 9/11
Mammary_Gland 10/11
###Markdown
Save DE results
###Code
with open(output_folder+'/DE_tissue_droplet.pickle', 'wb') as handle:
pickle.dump(DE_result_MAST, handle)
pickle.dump(analysis_list, handle)
pickle.dump(analysis_info, handle)
###Output
_____no_output_____ |
Unique Paths/Unique_Paths.ipynb | ###Markdown
A robot is located at the top-left corner of an m x n grid (marked 'Start' in the diagram below). The robot can only move either down or right at any point in time. The robot is trying to reach the bottom-right corner of the grid (marked 'Finish' in the diagram below). From [Leetcode](https://leetcode.com/problems/unique-paths/).
###Code
class Solution:
def uniquePaths(self, m, n):
grid = [[0 for i in range(n)] for j in range(m)]
return self.findPaths(grid, 0, 0)
def findPaths(self, grid, i, j):
if i == len(grid[0])-1 and j==len(grid)-1:
return 1
if i > len(grid[0])-1 or j > len(grid)-1:
return 0
return self.findPaths(grid, i+1,j) + self.findPaths(grid, i, j+1)
Solution = Solution()
Solution.uniquePaths(3,12)
###Output
_____no_output_____
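###Markdown
The recursive solution above recomputes the same $(i, j)$ subproblems many times, so it runs in exponential time; caching results brings it down to $O(mn)$. A memoized sketch (my addition, equivalent in output to the recursion above):
```python
from functools import lru_cache

def unique_paths(m, n):
    @lru_cache(maxsize=None)
    def paths(i, j):
        if i == m - 1 and j == n - 1:
            return 1
        if i >= m or j >= n:
            return 0
        return paths(i + 1, j) + paths(i, j + 1)
    return paths(0, 0)

print(unique_paths(3, 12))  # same answer as the call above
```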
###Markdown
Second Solution
###Code
class Solution:
def uniquePaths(self, m, n):
return self.findPaths(m, n)
def findPaths(self, m, n):
Mat = [[0 for i in range(n)] for j in range(m)]
for i in range(m):
for j in range(n):
if (i == 0 or j == 0):
#print(i,j)
Mat[i][j] = 1
else:
Mat[i][j] = Mat[i-1][j] + Mat[i][j-1]
return Mat[m-1][n-1]
Solution = Solution()
Solution.uniquePaths(33,42)
###Output
_____no_output_____ |
examples/ASE_simulation.ipynb | ###Markdown
Simulation generator for XClone
Install xclone to use the `CNV_ASE_simulator` function in `xclone/simulator.py`.
###Code
import numpy as np
from scipy.sparse import load_npz
from xclone.simulator import CNV_ASE_simulator
dat_dir = "./"
###Output
_____no_output_____
###Markdown
Load G&T data
###Code
# DP_RNA = load_npz(dat_dir + "/scRNA_DP.npz").toarray()
# DP_DNA = load_npz(dat_dir + "/scDNA_DP.npz").toarray()
DP_RNA = load_npz(dat_dir + "../data/G_T/scRNA/block_DP.npz").toarray()
DP_DNA = load_npz(dat_dir + "../data/G_T/scDNA/block_DP.npz").toarray()
print(DP_RNA.shape, DP_DNA.shape)
print(DP_DNA)
print(DP_RNA)
DP_RNA[DP_RNA != DP_RNA] = 0
DP_DNA[DP_DNA != DP_DNA] = 0
###Output
_____no_output_____
###Markdown
Example simulation
###Code
n_clone = 4
n_block = 100
tau_fix = np.array([[0, 1], [1, 2], [1, 1], [2, 1], [1, 0]])
T_mat_rand = np.random.choice(range(5), size=(n_block, n_clone))
simu_dat = CNV_ASE_simulator(tau_fix, T_mat_rand,
DP_RNA[:n_block, :],
DP_DNA[:n_block, :],
share_theta=False,
n_cell_DNA=150, n_cell_RNA=200,
random_seed=2)
simu_dat.keys()
print(np.unique(simu_dat['I_RNA'], return_counts=True),
np.unique(simu_dat['I_DNA'], return_counts=True))
###Output
(array([0, 1, 2, 3]), array([45, 42, 57, 56])) (array([0, 1, 2, 3]), array([37, 44, 29, 40]))
###Markdown
Run Vireo
###Code
import vireoSNP
from scipy import sparse
# np.random.seed(2)
theta_prior = np.array([[0.01, 1], [1, 2], [1, 1], [2, 1], [1, 0.01]])
AD = sparse.csr_matrix(np.append(simu_dat['AD_RNA'],
simu_dat['AD_DNA'], axis=1))
DP = sparse.csr_matrix(np.append(simu_dat['DP_RNA'],
simu_dat['DP_DNA'], axis=1))
### with multiple initializations
## CNV block specific allelic ratio can be used by add ASE_mode=True
res = vireoSNP.vireo_flock(AD, DP, n_donor=4, learn_GT=True,
n_extra_donor=0, #ASE_mode=True,
theta_prior=theta_prior, learn_theta=True,
n_init=50, check_doublet=False, random_seed=1)
## with single initialization
# res = vireoSNP.vireo_flock(AD, DP, n_donor=4, learn_GT=True,
# theta_prior=theta_prior, learn_theta=True,
# check_doublet=False)#, ASE_mode=False)
print("Output donor size:", res['ID_prob'].sum(axis=0))
from sklearn.metrics import adjusted_rand_score
print(adjusted_rand_score(np.argmax(res['ID_prob'], axis=1)[:200], simu_dat['I_RNA']))
print(adjusted_rand_score(np.argmax(res['ID_prob'], axis=1)[200:], simu_dat['I_DNA']))
###Output
0.9455299170439057
1.0
###Markdown
Plot simulation results
###Code
def anno_heat(X, anno, **kwargs):
WeiZhu_colors = np.array(['#4796d7', '#f79e54', '#79a702', '#df5858', '#556cab',
'#de7a1f', '#ffda5c', '#4b595c', '#6ab186', '#bddbcf',
'#daad58', '#488a99', '#f79b78', '#ffba00'])
idx = np.argsort(np.dot(X, 2**np.arange(X.shape[1])) +
anno * 2**X.shape[1])
g = sns.clustermap(X[idx], cmap="GnBu", yticklabels=False,
col_cluster=False, row_cluster=False,
row_colors=WeiZhu_colors[anno][idx], **kwargs)
for label in np.unique(anno):
g.ax_col_dendrogram.bar(0, 0, color=WeiZhu_colors[label],
label=label, linewidth=0)
g.ax_col_dendrogram.legend(loc="center", ncol=6, title="True clone")
g.cax.set_position([.95, .2, .03, .45])
return g
import seaborn as sns
import matplotlib.pyplot as plt
im = anno_heat(res['ID_prob'][:200], np.array(simu_dat['I_RNA'], int))
im.ax_heatmap.set(xlabel='inferred clones',
                  ylabel='200 cells',
                  title='scRNA-seq: Adj Rand Index=%.3f'
%(adjusted_rand_score(np.argmax(res['ID_prob'], axis=1)[:200],
simu_dat['I_RNA'])))
import seaborn as sns
import matplotlib.pyplot as plt
im = anno_heat(res['ID_prob'][200:], np.array(simu_dat['I_DNA'], int))
im.ax_heatmap.set(xlabel='inferred clones',
                  ylabel='150 cells',
                  title='scDNA-seq: Adj Rand Index=%.3f'
%(adjusted_rand_score(np.argmax(res['ID_prob'], axis=1)[200:],
simu_dat['I_DNA'])))
_theta = res['theta_shapes'][:, 0] / res['theta_shapes'].sum(axis=1)
_theta_clone = np.tensordot(res['GT_prob'], _theta, axes=[1,0])
im = sns.clustermap(_theta_clone, col_cluster=False)
im.ax_heatmap.set(xlabel='inferred clones',
ylabel='CNV blocks',
title='Allele ratio')
im = sns.clustermap(np.log10(simu_dat['DP_RNA'] + 1),
col_cluster=False, cmap='GnBu')
im.ax_heatmap.set(xlabel='200 cells',
ylabel='CNV blocks',
title='log10(DP) scRNA-seq')
im = sns.clustermap(np.log10(simu_dat['DP_DNA'] + 1),
col_cluster=False, cmap='GnBu')
im.ax_heatmap.set(xlabel='150 cells',
ylabel='CNV blocks',
title='log10(DP) scDNA-seq')
plt.plot(-res['LB_list'])
plt.show()
###Output
_____no_output_____ |
linear regression.ipynb | ###Markdown
Linear regression
Linear regression is a method for modeling the linear relationship between a dependent variable y and one or more independent variables X.
###Code
import sympy
import numpy
from matplotlib import pyplot
%matplotlib inline
sympy.init_printing()
w = sympy.Symbol('w', real=True)
f = w**2 + 3*w - 5
f
sympy.plotting.plot(f);
fprime = f.diff(w)
fprime
sympy.plotting.plot(fprime);
sympy.solve(fprime, w)
###Output
_____no_output_____
###Markdown
Gradient Descent
###Code
fpnum = sympy.lambdify(w, fprime)
type(fpnum)
w = 10.0
for _ in range(1000):
w = w - fpnum(w) *0.01
print(w)
###Output
-1.4999999806458753
###Markdown
데이터셋 만들어보기
###Code
x_data = numpy.linspace(-5, 5, 100)  # (start, end, number of points)
w_true = 2
b_true = 20
# output the line y = w*x + b with added noise (without random.normal it would be just a straight line)
y_data = w_true*x_data + b_true + numpy.random.normal(size=len(x_data))
pyplot.scatter(x_data, y_data);
x_data.shape
y_data.shape
w, b, x, y = sympy.symbols('w b x y')
cost_function = (w*x + b - y)**2
cost_function
grad_b = sympy.lambdify([w,b,x,y], cost_function.diff(b), 'numpy')
grad_w = sympy.lambdify([w,b,x,y], cost_function.diff(w), 'numpy')
w = 0
b = 0
for _ in range(1000):
descent_b = numpy.sum(grad_b(w,b,x_data,y_data))/len(x_data)
descent_w = numpy.sum(grad_w(w,b,x_data,y_data))/len(x_data)
w = w - descent_w*0.01
b = b - descent_b*0.01
print(w)
print(b)
pyplot.scatter(x_data,y_data)
pyplot.plot(x_data, w*x_data + b, 'r');
from IPython.display import YouTubeVideo
YouTubeVideo('gGOzHVUQCw0')
from urllib.request import urlretrieve
URL = 'http://go.gwu.edu/engcomp1data5?accessType=DOWNLOAD'
urlretrieve(URL, 'land_global_temperature_anomaly-1880-2016.csv')
import numpy
fname = '/content/land_global_temperature_anomaly-1880-2016.csv'
year, temp_anomaly = numpy.loadtxt(fname, delimiter=',', skiprows=5, unpack=True)
from matplotlib import pyplot
%matplotlib inline
pyplot.plot(year, temp_anomaly);
pyplot.rc('font', family='serif', size='18')
#You can set the size of the figure by doing:
pyplot.figure(figsize=(10,5))
#Plotting
pyplot.plot(year, temp_anomaly, color='#2929a3', linestyle='-', linewidth=1)
pyplot.title('Land global temperature anomalies. \n')
pyplot.xlabel('Year')
pyplot.ylabel('Land temperature anomaly [°C]')
pyplot.grid();
w = numpy.sum(temp_anomaly*(year - year.mean())) / numpy.sum(year*(year - year.mean()))
b = temp_anomaly.mean() - w*year.mean()
print(w)
print(b)
reg = b + w * year
pyplot.figure(figsize=(10, 5))
pyplot.plot(year, temp_anomaly, color='#2929a3', linestyle='-', linewidth=1, alpha=0.5)
pyplot.plot(year, reg, 'k--', linewidth=2, label='Linear regression')
pyplot.xlabel('Year')
pyplot.ylabel('Land temperature anomaly [°C]')
pyplot.legend(loc='best', fontsize=15)
pyplot.grid();
###Output
_____no_output_____
###Markdown
Linear regression**Supervised learning: regression** **Goal:** Fit an ols linear regression model to randomly generated $(X, y)$ data. **Why ols?** In a higher dimensional space, a regression hyperplane aims at summarizing the true relationship between the response, $y$, and the features, $X$, and disregards any random noise that is not part of that relationship. In math terms: $y \approx X \beta + \epsilon$. There are different routes one could take to fit the regression hyperplane into the data. Eyeballing being one of them. Ordinary least squares (ols) minimizes the euclidean distance between the fitted response, $\hat{y}$, and the observed response, $y$. So essentially the difference between what our model says ($\hat{y} \equiv X \beta$) and what we see in the data. This difference $\hat{y}_i - y_i$ $\forall$ $i$ is the error or "residual" for each observation $i$. And what ols minimizes is the sum of squares of all residuals (SSR). Again in math: we pick the model parameters $\beta^*$ that minimize the SSR $\|X \beta - y\|_2^2$. As it turns out, minimizing SSR is equivalent to fitting the regression hyperplane via maximum likelihood under Gaussian errors. I won't bother to prove that here. Plenty of people smarter than me have already done that out there. Just google it. Resources are plenty. **Loads.**
###Code
import numpy as np
from sklearn.linear_model import LinearRegression
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d.art3d import Line3DCollection
%matplotlib inline
###Output
_____no_output_____
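###Markdown
To make the SSR minimization concrete: setting the gradient of $\|X \beta - y\|_2^2$ to zero yields the normal equations $X^T X \beta^* = X^T y$. A minimal sketch on synthetic data (all names below are illustrative, separate from the example that follows):
###Code
import numpy as np

rng_demo = np.random.RandomState(0)
X_demo = rng_demo.randn(500, 2)
y_demo = X_demo @ np.array([-2.5, 0.8]) + rng_demo.randn(500)

# add an intercept column, then solve X'X beta = X'y
Xd = np.column_stack([np.ones(len(X_demo)), X_demo])
beta = np.linalg.solve(Xd.T @ Xd, Xd.T @ y_demo)
print(beta) # approximately [0, -2.5, 0.8]
###Output
_____no_output_____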
###Markdown
**Create & plot data.** For this example we create a random dataset of $n=$ 500 observations and $p=$ 2 features, $x_0$ and $x_1$. This leads to a feature matrix $X \in \mathbb{R}^{n \times p}$ and a response vector $y \in \mathbb{R}^{n}$. In terms of visualization here we show a 2D and a 3D representation of the data. This should drive home the fact that the human brain has a harder time making sense of higher dimensions. Just pick one $(x_{0i},x_{1i})$ data point and see in which of the graphs you can tell the corresponding $y$ value. (Of course I'm making your life easy with the colors). :)
###Code
# Call on numpy's pseudo random number generator
rng = np.random.RandomState(42)
# Training data
# Create feature matrix
X = rng.randn(500, 2)
x0, x1 = X[:, 0], X[:, 1]
# Create response vector with y varying along the [-2.5, .8]X plane
y = np.dot(X, [-2.5, .8]) + 2 * rng.randn(X.shape[0])
# Test data
X_test = rng.randn(100, 2)
x0_test, x1_test = X_test[:, 0], X_test[:, 1]
# Viz 2D scatterplot
sc = plt.scatter(x0, x1, c=y, s=30, alpha=.5, cmap='YlGnBu')
plt.xlabel('Feature 0')
plt.ylabel('Feature 1')
plt.title('Training Data')
plt.colorbar(sc)
# Viz 3D space
fig = plt.figure(figsize=(6, 6))
# 3D projection
ax = fig.add_subplot(111, projection='3d')
ax.scatter(x0, x1, y, c=y, s=40,cmap='YlGnBu')
# Add initial scatterplot to x0, x1 flat space
ax.scatter(x0, x1, -8 + np.zeros(X.shape[0]), c=y, s=10, cmap='YlGnBu', alpha=.2)
ax.set(xlabel='Feature 0', ylabel='Feature 1', zlabel='Response')
ax.view_init(13, -65)
# Add connecting lines
pts = np.hstack([X, y[:, None]]).reshape(-1, 1, 3)
segs = np.hstack([pts, pts])
segs[:, 0, 2] = -8
ax.add_collection3d(Line3DCollection(segs, colors='gray', alpha=.1))
###Output
_____no_output_____
###Markdown
**Train model & visualize fit.**
###Code
# Create & train model
ols = LinearRegression()
ols.fit(X, y)
# Viz scatterplot
fig, ax = plt.subplots()
pts = ax.scatter(x0, x1, c=y, s=50, cmap='YlGnBu', zorder=2)
# Compute model color mesh
xx0, xx1 = np.meshgrid(np.linspace(-4, 4),
np.linspace(-3, 4))
X_fitted = np.vstack([xx0.ravel(), xx1.ravel()]).T
y_fitted = ols.predict(X_fitted)
yy = y_fitted.reshape(xx0.shape)
# Plot hyperplane
ax.pcolorfast([-4, 4], [-3, 4], yy, alpha=0.5, cmap='YlGnBu', norm=pts.norm, zorder=1)
ax.set(xlabel='Feature 0', ylabel='Feature 1')
# Plot 3D
fig = plt.figure(figsize=(16, 6))
fig.subplots_adjust(left=0, right=0.6, wspace=0.1)
# fig1
ax = fig.add_subplot(121, projection='3d')
ax.scatter(x0, x1, y, c=y, s=40,cmap='YlGnBu')
ax.plot_surface(xx0, xx1, yy, color='b', alpha=.4)
ax.view_init(10, 90)
ax.set(xlabel='Feature 0', ylabel='Feature 1')
# fig2
ax = fig.add_subplot(122, projection='3d')
ax.scatter(x0, x1, y, c=y, s=40,cmap='YlGnBu')
ax.plot_surface(xx0, xx1, yy, color='c', alpha=.4)
ax.view_init(13, -65)
ax.set(xlabel='Feature 0', ylabel='Feature 1')
###Output
_____no_output_____
###Markdown
**Predict on test data & visualize results.** With the trained model at hand we proceed to see what response it predicts in a new data sample.
###Code
# Predict on test data
y_predicted = ols.predict(X_test)
# Plot model prediction
fig, ax = plt.subplots(1, 2, figsize=(16, 6), sharex=True, sharey=True)
fig.subplots_adjust(left=0.0625, right=0.95, wspace=0.1)
# fig1
ax[0].scatter(x0_test, x1_test, c='gray', s=150, alpha=.7)
ax[0].axis([-3, 3, -3, 3])
ax[0].set(title='Unknown Data')
# fig2
ax[1].scatter(x0_test, x1_test, c=y_predicted, s=150, alpha=.7, cmap='YlGnBu')
ax[1].set(title='Predicted Response')
###Output
_____no_output_____
###Markdown
Linear Regression > Linear Regression supposes that there's a linear relation between inputs and outputs (targets). This notebook shows how to train a linear regression model in PyTorch in two ways:- [from scratch](1.-Linear-Regression-from-scratch), functions are built manually.- [using PyTorch built-ins function](2.-Linear-Regression-using-PyTorch-built-ins). 1. Linear Regression from scratchThe figure below presents the workflow of this part.- [x] Convert inputs & targets to tensors: convert data (*inputs* & *targets*) from numpy arrays to tensors.- [x] Initialize parameters: identify the number of samples, of features and of targets. Initialize *weights* and *bias* to predict the target. These parameters will be optimized in the training process.- [x] Define functions: create the *hypothesis function* (model) to predict the target from the input, and the *cost function* (loss function) to compute the difference between the prediction and the target.- [x] Train model: find the *optimal values* of the parameters (weights & bias) by using the gradient descent algorithm. Make sure **to reset gradients to zero** before the next iteration.- [x] Predict: use the optimal parameters to predict the target from a given input. Import libraries
###Code
import numpy as np
import torch
###Output
_____no_output_____
###Markdown
1.1. Prepare dataConverting inputs & targets to tensors.
###Code
# inputs
inputs = np.array([[73, 67, 43],
[91, 88, 64],
[87, 134, 58],
[102, 43, 37],
[69, 96, 70]], dtype='float32')
# targets
targets = np.array([[56, 70],
[81, 101],
[119, 133],
[22, 37],
[103, 119]], dtype='float32')
# convert inputs and targets to tensors
X = torch.from_numpy(inputs)
Y = torch.from_numpy(targets)
###Output
_____no_output_____
###Markdown
1.2. Initialize parameters
###Code
# get number of samples (m) and of features (n)
m, n = X.shape
print('number of samples: %s' % m)
print('number of features: %s' % n)
# get number of outputs (a)
_, a = Y.shape
print('number of outputs: %s' % a)
# initialize parameters
W = torch.randn(a, n, requires_grad=True) # weights
b = torch.randn(a, requires_grad=True) # bias
###Output
_____no_output_____
###Markdown
1.3. Define functions 1.3.1. Hypothesis function / Model
###Code
def model(X, W, b):
Y_hat = X @ W.t() + b
return Y_hat
###Output
_____no_output_____
###Markdown
1.3.2. Cost function / Loss function
###Code
def cost_fn(Y_hat, Y):
diff = Y_hat - Y
return torch.sum(diff * diff)/diff.numel()
###Output
_____no_output_____
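###Markdown
The cost above is the mean squared error, $\mathrm{MSE} = \frac{1}{N}\sum_{i}(\hat{y}_i - y_i)^2$, averaged over every element of the prediction matrix.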
###Markdown
1.4. Train modelThe algorithm Gradient Descent repeats the process of adjusting the weights and biases using the gradients multiple times to reduce the loss.
###Code
epochs = 100 # define number of iteration
lr = 1e-5 # learning rate
for i in range(epochs):
Y_hat = model(X, W, b)
cost = cost_fn(Y_hat, Y)
cost.backward()
with torch.no_grad():
W -= W.grad * lr
b -= b.grad * lr
W.grad.zero_()
b.grad.zero_()
###Output
/home/tuanva/.local/lib/python3.8/site-packages/torch/autograd/__init__.py:130: UserWarning: CUDA initialization: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx (Triggered internally at /pytorch/c10/cuda/CUDAFunctions.cpp:100.)
Variable._execution_engine.run_backward(
###Markdown
1.5. Predict
###Code
x = torch.tensor([[75, 63, 44.]])
y_hat = model(x, W, b)
print(y_hat.data)
###Output
tensor([[52.3504, 76.2581]])
###Markdown
2. Linear Regression using PyTorch built-insThe figure below presents the workflow of this part.- [x] Convert inputs & targets to tensors: convert data (*inputs* & *targets*) from numpy arrays to tensors. **Make sure** that the numpy arrays are in data type `float32`.- [x] Define dataset & dataloader: - the dataset holds tuples of inputs & targets. - the dataloader shuffles the dataset and divides it into batches.- [x] Define functions: - identify the number of features and of targets, and set the model to a linear function. - set the cost function to a mean squared loss function.- [x] Define optimizer: the optimizer identifies the algorithm used to adjust the model parameters. Set the optimizer to use the stochastic gradient descent algorithm.- [x] Train model: find the *optimal values* of the model parameters by repeating the optimization process. **Make sure** to reset gradients to zero before the next iteration.- [x] Predict: use the optimal parameters to predict the target from a given input. Import libraries
###Code
import torch.nn as nn
from torch.utils.data import TensorDataset
from torch.utils.data import DataLoader
import torch.nn.functional as F
###Output
_____no_output_____
###Markdown
2.1 Prepare data Convert inputs & targets to tensorsMake sure numpy arrays are in data type `float32`.
###Code
# inputs
inputs = np.array([[73, 67, 43],
[91, 88, 64],
[87, 134, 58],
[102, 43, 37],
[69, 96, 70],
[74, 66, 43],
[91, 87, 65],
[88, 134, 59],
[101, 44, 37],
[68, 96, 71],
[73, 66, 44],
[92, 87, 64],
[87, 135, 57],
[103, 43, 36],
[68, 97, 70]], dtype='float32')
# targets
targets = np.array([[56, 70],
[81, 101],
[119, 133],
[22, 37],
[103, 119],
[57, 69],
[80, 102],
[118, 132],
[21, 38],
[104, 118],
[57, 69],
[82, 100],
[118, 134],
[20, 38],
[102, 120]], dtype='float32')
# convert to tensors
X = torch.from_numpy(inputs)
Y = torch.from_numpy(targets)
###Output
_____no_output_____
###Markdown
Define dataset & data loader
###Code
# define dataset
dataset = TensorDataset(X, Y)
# define data loader
batch_size = 5
dataloader = DataLoader(dataset, batch_size, shuffle=True)
for batch in dataloader:
xs, ys = batch
print(xs.shape)
print(xs.data)
print('\n')
print(ys.shape)
print(ys.data)
break;
###Output
torch.Size([5, 3])
tensor([[101., 44., 37.],
[ 73., 67., 43.],
[ 87., 134., 58.],
[ 74., 66., 43.],
[103., 43., 36.]])
torch.Size([5, 2])
tensor([[ 21., 38.],
[ 56., 70.],
[119., 133.],
[ 57., 69.],
[ 20., 38.]])
###Markdown
2.2 Define functions 2.2.1 Hypothesis function / Model
###Code
# get number of samples (m) and of features (n)
m, n = X.shape
# get number of outputs
_, a = Y.shape
# define hypothesis function
model = nn.Linear(n, a)
print(model.weight)
print(model.bias)
print(list(model.parameters()))
###Output
Parameter containing:
tensor([[ 0.5617, -0.2088, -0.0547],
[ 0.1231, -0.4818, -0.2580]], requires_grad=True)
Parameter containing:
tensor([-0.2785, 0.3813], requires_grad=True)
[Parameter containing:
tensor([[ 0.5617, -0.2088, -0.0547],
[ 0.1231, -0.4818, -0.2580]], requires_grad=True), Parameter containing:
tensor([-0.2785, 0.3813], requires_grad=True)]
###Markdown
2.2.2 Cost function / Loss function
###Code
cost_fn = F.mse_loss
###Output
_____no_output_____
###Markdown
2.3 Define optimizerThe optimizer identifies the algorithm used to adjust the model parameters.
###Code
opt = torch.optim.SGD(model.parameters(), lr=1e-5) # use the algorithm stochastic gradient descent
###Output
_____no_output_____
###Markdown
2.4 Train model
###Code
def fit(epochs, model, cost_fn, opt, dataloader):
for epoch in range(epochs):
for xs, ys in dataloader:
ys_hat = model(xs) # predict
cost = cost_fn(ys_hat, ys) # compute cost
cost.backward() # compute gradients
opt.step() # adjust model parameters
opt.zero_grad() # reset gradients to 0
if (epoch+1) % 10 == 0:
print('epoch {}/{}, cost: {:.4f}'.format(epoch+1, epochs, cost.item()))
fit(100, model, cost_fn, opt, dataloader)
###Output
epoch 10/100, cost: 1126.6487
epoch 20/100, cost: 309.7225
epoch 30/100, cost: 412.5945
epoch 40/100, cost: 227.8946
epoch 50/100, cost: 187.1651
epoch 60/100, cost: 181.5051
epoch 70/100, cost: 6.7161
epoch 80/100, cost: 75.3690
epoch 90/100, cost: 54.0960
epoch 100/100, cost: 63.4944
###Markdown
2.5 Predict
###Code
x = torch.tensor([[75, 63, 44.]])
y_hat = model(x)
print(y_hat.data)
###Output
tensor([[55.3490, 69.0226]])
###Markdown
$Ax = y$
###Code
v = linalg.lstsq(A, y)[0]
v
plot(x, dot(A, v), "r")
scatter(x, y)
###Output
_____no_output_____
###Markdown
Linear models
- a linear model assumes a linear relationship between the dependent and independent variables
- an increase/decrease in one variable leads to an increase/decrease in the other
- examples: height vs weight, size of land vs price
- lower MSE = better model
- the model is y = mx + c, where m = slope and c = intercept
- but how do we find m and c?
###Code
#importing the libraries
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.metrics import mean_squared_error as mse
#creating a sample data
experience = [1.2, 1.5, 1.9, 2.2, 2.4, 2.5, 2.8, 3.1, 3.3, 3.7, 4.2, 4.4]
salary = [1.7, 2.4, 2.3, 3.1, 3.7, 4.2, 4.4, 6.1, 5.4, 5.7, 6.4, 6.2]
data = pd.DataFrame({ 'salary' : salary, 'experience' :experience })
data.shape
data.head()
#plotting the data
plt.scatter(data.experience, data.salary, color ='red', label= 'data points')
plt.xlabel('experience')
plt.ylabel('salary')
plt.legend()
###Output
_____no_output_____
###Markdown
Cost function
###Code
# trying to plot one line; let's keep the slope and intercept constant
m = 0.1
c = 1.1
line1 = []
for i in range(len(data)):
line1.append(data.experience[i] * m + c)
plt.scatter(data.experience, data.salary, color ='red')
plt.plot(data.experience, line1, color ='black', label= 'line')
plt.xlabel('experience')
plt.ylabel('salary')
plt.legend()
MSE = mse(data.salary, line1)
print(MSE)
def Error(m, data):
c = 1.1
salary = []
for i in range(len(data.experience)):
y = data.experience[i] * m + c
salary.append(y)
MSE = mse(data.salary, salary)
return MSE
slope = [i/100 for i in range(0, 150)]
test_mse = []
for i in slope:
temp= Error(m = i, data = data)
test_mse.append(temp)
#plotting the mse against the slope values, keeping the intercept constant
plt.plot(slope, test_mse, color ='black', label= 'line')
plt.xlabel('slope')
plt.ylabel('mse error')
plt.legend()
###Output
_____no_output_____
###Markdown
Gradient descent in linear regressionAn optimisation technique that minimises the error generated. It works iteratively: calculate the error at each iteration and optimise the model parameters until the model converges to the minimum cost.
Steps:
1. Randomly initialize the parameters
2. Generate predictions
3. Calculate the cost
4. Update the parameters
5. Repeat the above steps until convergence
How to find the global minimum: random initialization, and adjusting the learning rate (so that it avoids local minima).
Assumptions:
- Linear relationship: if the relationship is not linear we can't use it directly, so we transform the variables
- No correlation of error terms (if correlated, a change in one error term impacts another)
- Constant variance of error terms (no trend in the variance)
- No correlation among independent variables: we eliminate the offending term; VIF = 1/(1 - R^2) helps us address multicollinearity
- Errors normally distributed: check with a standard QQ chart; normally distributed = no outliers
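A minimal sketch of the update rule described above, run on the toy salary data (the 0.01 learning rate and 1000 iterations are illustrative assumptions, not tuned values):
###Code
import numpy as np
x_arr = np.array(experience)
y_arr = np.array(salary)
m, c = 0.0, 0.0
lr = 0.01
for _ in range(1000):
    y_hat = m * x_arr + c # generate predictions
    grad_m = 2 * np.mean((y_hat - y_arr) * x_arr) # d(MSE)/dm
    grad_c = 2 * np.mean(y_hat - y_arr) # d(MSE)/dc
    m -= lr * grad_m # update parameters
    c -= lr * grad_c
print(m, c)
###Output
_____no_output_____
###Markdown
Linear regression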
###Code
#importing the lib
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
#importing the data
#we are using the bigmart sales dataset
df = pd.read_csv('data_knn_regression_cleaned_bigmart_sales.csv')
df.shape
df.head()
#checking for null values
df.isnull().sum()
#checking the datatypes
df.dtypes
#all categorical var are converted into numeric =encoding
# seperating independent and dependent var
x = df.drop(['Item_Outlet_Sales'], axis=1)
y = df['Item_Outlet_Sales']
x.shape, y.shape
#splitting the data into test and train
from sklearn.model_selection import train_test_split as tts
x_train, x_test, y_train, y_test = tts(x, y, random_state=56)
x_train.shape, x_test.shape, y_train.shape, y_test.shape
#implementing the linear regression
from sklearn.linear_model import LinearRegression as LR
from sklearn.metrics import mean_absolute_error as mae
#creating instance of linear regression
lr = LR()
#fit the model
lr.fit(x_train, y_train)
#prediction over the train set and calculate the error
train_predict = lr.predict(x_train)
k = mae(train_predict, y_train)
print('error wrt train set: ',k)
#prediction over the test set
test_predict = lr.predict(x_test)
k = mae(test_predict, y_test)
print('error wrt test set: ',k)
# check at the coefficents
lr.coef_
#plotting the coefficients
plt.figure(figsize =(8, 6), dpi = 120, facecolor = 'w', edgecolor = 'b' )
x = range(len(x_train.columns))
y = lr.coef_
plt.bar(x, y)
plt.xlabel('Variables')
plt.ylabel('Coefficients')
plt.title('Coefficients plot')
'''
here we can see how much the model depends upon each independent
variable, but these coefficients are not suitable for interpretation
because they are not scaled
'''
###Output
_____no_output_____
###Markdown
checking assumptions of linear model
###Code
#arranging and calculating the residuals
residuals = pd.DataFrame({'fitted_values': y_test, 'predicted_values': test_predict})
residuals['residuals'] = residuals['fitted_values'] - residuals['predicted_values']
residuals.shape
#plotting the residual curve( is there constant var or homoscedastic)
plt.figure(figsize =(10,6), dpi=120, facecolor='w',edgecolor='b')
f = range(0,2131)
k = [0 for i in range(0, 2131)]
plt.scatter(f, residuals.residuals[:], label='residuals')
plt.plot(f, k, color = 'red', label = 'regression line')
plt.xlabel('fitted points')
plt.ylabel('residuals')
plt.ylim(-4000, 4000)
plt.title('Residuals points')
'''
Residual plot looks homoscedastic, i.e., the variance of the
error across the data is nearly constant
'''
###Output
_____no_output_____
###Markdown
checking distribution of residualsIf residuals are not normally distributed it implies that the model varies in accuracy for different values of the predictor variable. This suggests that the relationship between predictor and outcome may not be linear and undermines a key assumption of the model.
###Code
#histogram for distribution
plt.figure(figsize=(10,6), dpi=120, facecolor='w', edgecolor='b')
plt.hist(residuals.residuals[:], bins = 150)
plt.xlabel('Error')
plt.ylabel('frequency')
plt.title('distribution of error terms')
'''
according to the histogram, the distribution of errors is nearly
normal, but there are some outliers on the higher end of the errors
'''
###Output
_____no_output_____
###Markdown
QQ plot is data normally distributed ?
###Code
#importing the QQ plot from the statsmodels
from statsmodels.graphics.gofplots import qqplot
#plotting the QQ plot
fig, ax = plt.subplots(figsize=(5, 5), dpi = 120)
qqplot(residuals.residuals[:], line ='s', ax =ax)
plt.xlabel('Residual Quantiles')
plt.ylabel('Ideal Scaled Quantiles')
plt.title('Checking distribution of Residual Errors')
'''
QQ plot clearly verifies our findings from the histogram of
the residuals, the data is mostly normal in nature, but there
are some outliers on the higher end of the residuals
an ACF plot (not shown here) would likewise show almost
negligible correlation between the error terms,
hence there is no autocorrelation present in the data
'''
###Output
_____no_output_____
###Markdown
Variance Inflation Factor (VIF) checking for multi collinearity
###Code
# importing variance_inflation_factor function from the statsmodels
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.tools.tools import add_constant
#calculate VIF for every column (only meaningful for non-categorical variables)
VIF = pd.Series([variance_inflation_factor(df.values, i) for i in range(df.shape[1])], index = df.columns)
'''
from this list, we clearly see that no independent variable
has a VIF over 5, which means that no features exhibit
multicollinearity in the dataset
note that VIF works only for continuous variables
'''
VIF
###Output
_____no_output_____
###Markdown
Model Interpretability So far we have simply been predicting values using linear regression, but in order to interpret the model, normalising the data is essential
###Code
# creating instance of linear regression
lr = LR(normalize = True)
#fitting the model
lr.fit(x_train, y_train)
#prediction over the train set and calculate the error
train_predict = lr.predict(x_train)
k = mae(train_predict, y_train)
print('error wrt train set: ',k)
#prediction over the test set
test_predict = lr.predict(x_test)
k = mae(test_predict, y_test)
print('error wrt test set: ',k)
# check at the coefficents
lr.coef_
#plotting the coefficients
plt.figure(figsize =(8, 6), dpi = 120, facecolor = 'w', edgecolor = 'b' )
x = range(len(x_train.columns))
y = lr.coef_
plt.bar(x, y)
plt.xlabel('Variables')
plt.ylabel('Coefficients')
plt.title('Normalised Coefficients plot')
'''
now the coefficients we see are normalised and we
can easily make final inferences out of them.
here we can see that there are a lot of coefficients which are
near zero and not significant,
so let us try removing them and build the model again
'''
###Output
_____no_output_____
###Markdown
Creating new subsets of data
###Code
# seperating independent and dependent var
x = df.drop(['Item_Outlet_Sales'], axis=1)
y = df['Item_Outlet_Sales']
x.shape, y.shape
#arranging coeff with features
Coefficients = pd.DataFrame({'Variable': x.columns,'coefficient': lr.coef_})
Coefficients.shape
Coefficients.head()
#choosing variables with coefficient greater than 0.5 (filtering significant features)
sig_var = Coefficients[Coefficients.coefficient > 0.5]
sig_var
#extracting the significant subset to independent var
subset = x[sig_var['Variable'].values]
subset.head()
#splitting the data into test and train
from sklearn.model_selection import train_test_split as tts
x_train, x_test, y_train, y_test = tts(subset, y, random_state=56)
x_train.shape, x_test.shape, y_train.shape, y_test.shape
#implementing the linear regression
from sklearn.linear_model import LinearRegression as LR
from sklearn.metrics import mean_absolute_error as mae
#creating instance of linear regression
lr = LR()
#fit the model
lr.fit(x_train, y_train)
#prediction over the train set and calculate the error
train_predict = lr.predict(x_train)
k = mae(train_predict, y_train)
print('error wrt train set: ',k)
#prediction over the test set
test_predict = lr.predict(x_test)
k = mae(test_predict, y_test)
print('error wrt test set: ',k)
# check at the coefficents
lr.coef_
#plotting the coefficients
plt.figure(figsize =(8, 6), dpi = 120, facecolor = 'w', edgecolor = 'b' )
x = range(len(x_train.columns))
y = lr.coef_
plt.bar(x, y)
plt.xlabel('Variables')
plt.ylabel('Coefficients')
plt.title('Coefficients plot')
###Output
_____no_output_____
###Markdown
$Av= y $
###Code
v = linalg.lstsq(A, y)[0]
v
plot(x, dot(A, v), "r")
scatter(x, y)
y = 3*x**2 + 15*x + 200 + 10*error
scatter(x, y)
A = hstack([x**2, x, ones_like(x)]) # design-matrix columns: x**2, x, 1
v = linalg.lstsq(A, y)[0]
v
plot(x, dot(A, v), "r")
scatter(x, y)
###Output
_____no_output_____
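###Markdown
`linalg.lstsq(A, y)[0]` returns the $v$ that minimizes $\|Av - y\|_2^2$, so stacking $[x^2,\ x,\ 1]$ as the columns of $A$ fits the quadratic by ordinary least squares.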
###Markdown
Read the Data
###Code
# read the data from the tab separated file
df = pd.read_csv('data/voting_data_anonymized.tsv', sep='\t')
original_columns = list(df.columns)
df.head()
###Output
_____no_output_____
###Markdown
Impute and Augment the Data
###Code
# fill missing property values with mean property value
df['prop value'].fillna(int(df['prop value'].mean()), inplace=True)
# drop all voters who were not registered in 2021
df.dropna(subset=['voted 2021'], axis=0, inplace=True)
def convert_voted_column_to_int(df, column_name):
'''
INPUT:
df - pandas DataFrame
column_name - the name of the column to convert
OUTPUT:
df - Pandas DataFrame with column converted.
The 'voted' columns contain three values. Convert each row of column [column_name] to numeric values.
NaN -> 0
False -> 0
True -> 1
'''
df[column_name] = df[column_name] * 1
df[column_name].fillna(0, inplace=True)
return df
#convert all voted columns to numeric
for i in range(2010, 2022):
column_name = f'voted {i}'
print(f'converting column to int: {column_name}')
convert_voted_column_to_int(df, column_name)
# create dummy columns for category columns, save original columns
category_columns = list(df.select_dtypes(include=['object']).columns)
df_category_columns = pd.DataFrame()
print(category_columns)
for i in category_columns:
print(f'creating dummies for category column: {i}')
# df_category_columns = pd.concat([df_category_columns, df[i]])
df_category_columns[i] = df[i]
df = pd.concat([df.drop(i, axis=1),
pd.get_dummies(df[i], prefix=i, prefix_sep='_')],
axis=1)
# augment data with all triplet combinations of the last four years indicating
# whether the voter voted in all three of those years
years = [2016, 2017, 2018, 2019]
L=3
for subset in itertools.combinations(years, L):
column_name = f'voted {",".join([str(x) for x in subset])}'
df[column_name] = 1
print(f'adding: {column_name}')
for i in subset:
df[column_name] = df[column_name] & df[f'voted {i}']
df=df.copy()
df.head()
# show list of columns after imputing and augmentation
df.columns
###Output
_____no_output_____
###Markdown
Fit a linear regression model to the data using a Theil-Sen estimator
###Code
print('data shape:', df.shape)
# we'll use the 2020 voting data to fit our model
X = df
y=df['voted 2020']
# split into train and test
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.25, random_state=42)
# make a copy of the train and test sets before dropping columns
df_train = X_train.copy()
df_test = X_test.copy()
# drop the 2020 and 2021 columns from the data set
drop_columns = ['voted 2020', 'voted 2021']
X_train.drop(columns=drop_columns, inplace=True)
X_test.drop(columns=drop_columns, inplace=True)
print('X shape:', X_train.shape)
# use the Theil-Sen estimator to fit a linear model
# https://en.wikipedia.org/wiki/Theil%E2%80%93Sen_estimator
lm_model = TheilSenRegressor(random_state=42, max_iter=500, fit_intercept=True, verbose=True, max_subpopulation=2000)
lm_pipeline = make_pipeline(MinMaxScaler(), PolynomialFeatures(1), lm_model)
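# note: PolynomialFeatures(1) only prepends a bias column, so the model stays linear in the features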
lm_pipeline.fit(X_train, y_train)
# predict
y_test_preds = lm_pipeline.predict(X_test)
y_train_preds = lm_pipeline.predict(X_train)
# score
test_score = r2_score(y_test, y_test_preds)
train_score = r2_score(y_train, y_train_preds)
print(train_score, test_score)
###Output
data shape: (11278, 46)
X shape: (8458, 44)
Breakdown point: 0.01487588735312051
Number of samples: 8458
Tolerable outliers: 125
Number of subpopulations: 2000
###Markdown
What does the output of the linear regression model prediction look like?
###Code
#create a scatter plot to show distribution of predicted values
y_all = np.concatenate((y_train_preds, y_test_preds), axis=0)
df_y_all = pd.DataFrame(y_all, columns=['prediction'])
df_y_all['x'] = df.index
ch = sns.color_palette("tab10").as_hex()
ch1 = ch[0]
ch2 = ch[3]
pal = sns.color_palette(f'blend:{ch1},{ch1},{ch2},{ch2},{ch2},{ch2},{ch2}', as_cmap=True)
plot = df_y_all[:1000].plot.scatter(x='x', y='prediction', marker='+', c='prediction', colormap=pal)
###Output
_____no_output_____
###Markdown
How does each column contribute to the voter propensity prediction?
###Code
def get_model_coefficients(coefficients, variable_names, drop_variable_names=None):
'''
INPUT:
coefficients - the coefficients of the linear model
variable_names - names of variables corresponding to coefficients
drop_variable_names - drop variables with these names (useful for removing poly offset)
OUTPUT:
df_c - a dataframe holding the coefficient and abs(estimate)
Provides a dataframe that can be used to understand the most influential coefficients
in a linear model by providing the coefficient estimates along with the name of the
variable attached to the coefficient.
'''
df_c = pd.DataFrame()
df_c['column'] = variable_names
df_c['weight'] = coefficients
df_c['abs(weight)'] = np.abs(coefficients)
for i in (drop_variable_names or []):
df_c = df_c.drop(labels=list(df_c.index[df_c['column']==i]), axis=0)
df_c = df_c.sort_values('abs(weight)', ascending=False)
return df_c
#display coefficients of fitted model
df_c = get_model_coefficients(lm_model.coef_, ['poly_offset'] + list(X_train.columns), drop_variable_names=['poly_offset'])
pal = sns.color_palette("pastel").as_hex()
df_c.style.bar(subset=['weight'], color=[pal[3], pal[2]])
# calculate coefficient for 2020
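# (extrapolates linearly from the two largest |weight| values, which are assumed to belong to the two most recent voting years)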
coef_0 = df_c['abs(weight)'].iloc[0]
coef_1 = df_c['abs(weight)'].iloc[1]
coef_2020 = coef_0 + (coef_0 - coef_1)
print('interpolated 2020 coefficient:',coef_2020)
###Output
interpolated 2020 coefficient: 0.32407532432739317
###Markdown
2021 PredictionPut the train and test sets back together, append prediction results, and add the original categorical columns.Calculate the 2021 voter propensity prediction, and normalize it.
###Code
def concat_unindexed_column(df, c, name):
'''
INPUT:
df - DataFrame to add column to
c - unindexed column
name - name for unindexed column
OUTPUT:
(df) with (c) column added as (name), retaining df indexes
Concats an unindexed column (c) to a DataFrame (df), while maintaining
indexes in (df). The new column will use the (name) provided.
'''
df = df.reset_index()
df = pd.concat([df, pd.DataFrame(c, columns=[name])], axis=1)
df.set_index('index', inplace=True)
return df
# add predictions to train and test sets
df_train2 = concat_unindexed_column(df_train, y_train_preds, 'pred 2020')
df_test2 = concat_unindexed_column(df_test, y_test_preds, 'pred 2020')
# concat train and test sets
df_all = pd.concat([df_train2, df_test2])
# add original category columns
df_all = pd.concat([df_all, df_category_columns], axis=1)
# calculate the 2021 predictions
df_all['pred 2021'] = df_all['pred 2020'] + df_all['voted 2020'] * coef_2020
# normalize the 2021 prediction column
df_all['pred 2021'] -= df_all['pred 2021'].min()
df_all['pred 2021'] /= df_all['pred 2021'].max()
df_all.head()
# show voter list with original columns, sorted by 2021 predictions from most likely to least likely voters
df_all.sort_values(['pred 2021'], ascending=False)[['pred 2021']+original_columns]
###Output
_____no_output_____
###Markdown
Testing the ModelTest the model against traditional methods of targeting voters: * Random picks * Prioritize voters who voted in the last 3 elections * Prioritize voters who voted in the last 2 elections * Prioritize voters who voted in the last election * Use the top predictions from the linear regression modelCreate a 'mailing list' containing a third of the voters, and calculate the
accuracy of reaching voters who voted in 2021.
# show total number of voters, the number of targetted voters and number of 2021 voters
n_records = df_all.shape[0]
n_picks = n_records // 3
n_voted_2021 = df_all.loc[(df_all['voted 2021']==1)].shape[0]
print(f'number of records: {n_records}')
print(f'number of votes: {n_voted_2021}')
print(f'number of picks: {n_picks}')
df_all['voted last 3'] = df_all['voted 2019'] & df_all['voted 2018'] & df_all['voted 2020']
df_all['voted last 2'] = df_all['voted 2018'] & df_all['voted 2020']
df_all['voted last 1'] = df_all['voted 2020']
# find number of correctly identified voters for all methods
n_random_correct = df_all[:n_picks]['voted 2021'].sum()
n_last_3_correct = df_all.sort_values(['voted last 3'], ascending=False)[:n_picks]['voted 2021'].sum()
n_last_2_correct = df_all.sort_values(['voted last 2'], ascending=False)[:n_picks]['voted 2021'].sum()
n_last_1_correct = df_all.sort_values(['voted last 1'], ascending=False)[:n_picks]['voted 2021'].sum()
n_pred_correct = df_all.sort_values(['pred 2021'], ascending=False)[:n_picks]['voted 2021'].sum()
def ratio_to_percent(numerator, denominator):
'''
INPUT:
numerator - the numerator of the ratio
denominator - the denominator of the ratio
OUTPUT:
ratio as percentage
Calculates the ratio, as a percentage: numerator/denominator*100
'''
return numerator/denominator*100
# create DataFrame with a table of results
df_results_master = pd.DataFrame({
'Method': ['Random', 'Voted Last\n3 Years', 'Voted Last\n2 Years', 'Voted\nLast Year', 'Linear\nRegression\nModel'],
'2021 Voters Reached_#': [n_random_correct,
n_last_3_correct,
n_last_2_correct,
n_last_1_correct,
n_pred_correct,
],
'2021 Voters Reached_%': [ratio_to_percent(n_random_correct, n_voted_2021),
ratio_to_percent(n_last_3_correct, n_voted_2021),
ratio_to_percent(n_last_2_correct, n_voted_2021),
ratio_to_percent(n_last_1_correct, n_voted_2021),
ratio_to_percent(n_pred_correct, n_voted_2021),
],
'Targeting Accuracy_%': [ratio_to_percent(n_random_correct, n_picks),
ratio_to_percent(n_last_3_correct, n_picks),
ratio_to_percent(n_last_2_correct, n_picks),
ratio_to_percent(n_last_1_correct, n_picks),
ratio_to_percent(n_pred_correct, n_picks),
]
})
# show results results of test
df_results = df_results_master.copy()
df_results.columns = pd.MultiIndex.from_tuples([tuple(c.split("_")) if ('_' in c) else tuple([c,' ']) for c in df_results.columns])
df_results_styler = df_results.style.set_properties(**{'text-align': 'right'})
df_results_styler
###Output
_____no_output_____
###Markdown
How does the model's targeting accuracy compare to traditional methods?
###Code
# plot targeting accuracy
df_results = df_results_master.copy()
sns.set_theme()
sns.set(style='whitegrid', font="Rockwell")
palette = sns.dark_palette("#69d", reverse=True, as_cmap=True)
ax = sns.barplot(x='Method', y='Targeting Accuracy_%', data=df_results, edgecolor='#3a4e5c', palette='Blues_d')
ax.set_xlabel('Prediction Method')
ax.set_ylabel('Accuracy (%)')
ax.set_title('Targeting Accuracy')
plt.show()
###Output
_____no_output_____
###Markdown
What percentage of voting voters are reached by each method?
###Code
#plot 2021 voter reach
df_results = df_results_master.copy()
df_results['total'] = 100
sns.set_theme()
sns.set(style='whitegrid', font="Rockwell")
ax = sns.barplot(x = 'Method', y = 'total', data = df_results, color = '#ffffff', edgecolor='#3a4e5c')
ax = sns.barplot(x = 'Method', y = '2021 Voters Reached_%', data = df_results, color = '#4884af', edgecolor='#3a4e5c', palette='Blues_d')
# ax.axhline(50, ls='--', color='red')
plt.xlabel('')
plt.ylabel('2021 Voters (%)')
plt.title('2021 Voters Reached')
plt.show()
###Output
_____no_output_____
###Markdown
What was the effective cost of mailing per voting voter for each method?
###Code
#plot cost per voter reached
df_results = df_results_master.copy()
cost_per_mailer = 0.70
df_results['cost per voter'] = cost_per_mailer / df_results['Targeting Accuracy_%'] * 100
print(df_results['cost per voter'])
sns.set_theme()
sns.set(style='whitegrid', font="Rockwell")
ax = sns.barplot(x = 'Method', y = 'cost per voter', data = df_results, color = '#4884af', edgecolor='#3a4e5c', palette='Blues_d')
# ax.axhline(50, ls='--', color='red')
plt.xlabel('')
plt.ylabel('Cost ($)')
plt.title('Cost per Voting Voter Reached')
plt.show()
# plot voter reach as piechart for each method
df_results = df_results_master.copy()
df_results['2021 Voters not Reached_%'] = 100-df_results['2021 Voters Reached_%']
df_results['Method'] = df_results['Method'].str.replace('\n', ' ')
df_results = df_results[['Method','2021 Voters Reached_%', '2021 Voters not Reached_%']] \
.rename(columns={'2021 Voters not Reached_%': 'Voters Not Reached',
'2021 Voters Reached_%': 'Voters Reached'}) \
.set_index('Method').T
def plot_voter_piechart(df, y_column_name, title=None):
sns.set_theme()
sns.set(style='whitegrid', font="Rockwell")
if title is None:
title=y_column_name
df.plot.pie(y=y_column_name,
ylabel='',
title=title,
legend=False,
autopct = "%.1f%%",
colors = ['#6c9dbf', 'white'],
counterclock=False, startangle=-270,
wedgeprops={'edgecolor':'#3a4e5c','linewidth': 1, 'antialiased': True},
figsize=(4, 4))
plt.show()
plot_voter_piechart(df_results,'Random')
plot_voter_piechart(df_results,'Voted Last 3 Years')
plot_voter_piechart(df_results,'Voted Last 2 Years')
plot_voter_piechart(df_results,'Voted Last Year')
plot_voter_piechart(df_results,'Linear Regression Model')
df_results
###Output
_____no_output_____
###Markdown
PREDICTIONS
###Code
prediction= lm.predict(X_test)
plt.scatter(y_test,prediction)
sns.distplot((y_test-prediction))
from sklearn import metrics
metrics.mean_absolute_error(y_test,prediction) # mean absolute error (MAE)
metrics.mean_squared_error(y_test,prediction) # mean squared error (MSE)
np.sqrt(metrics.mean_squared_error(y_test,prediction)) # root mean squared error (RMSE)
###Output
_____no_output_____ |
Model/Remove_Outliers.ipynb | ###Markdown
Read The Data
###Code
mydata = pd.read_csv('All_astronomy.csv',sep=',',quotechar='"')
mydata['body'].head(1)[0]
for i in range(len(mydata)):
if isinstance(mydata['body'][i], float):
print(i, mydata['body'][i], mydata['id'][i])
print(len(mydata))
mydata =mydata.dropna()
print(len(mydata))
print (mydata.q_score.min(),mydata.q_score.median(),mydata.q_score.mean(),mydata.q_score.max())
print (mydata.score.min(),mydata.score.median(),mydata.score.mean(),mydata.score.max())
fig = plt.figure()
ax = fig.add_subplot(111)
mydata.plot(kind='scatter', x='score', y='q_score',ylim=(mydata.q_score.min()-1,mydata.q_score.max()+10),\
xlim=(mydata.score.min()-1,mydata.score.max()+1),s=100,ax=ax)
ax.set_xlabel('Answer votes')
ax.set_ylabel('Question votes')
ax.yaxis.label.set_size(20)
ax.xaxis.label.set_size(20)
plt.show()
print(len(mydata))
mydata = mydata[((mydata.score - mydata.score.mean()) / mydata.score.std()).abs() < 3] # keep rows within 3 standard deviations of the mean score
print(len(mydata))
mydata.to_csv('All_programmers_outliered.csv')
mydata.plot(kind='scatter', x='score', y='q_score')#.hist(stacked=True, bins=20)
plt.show()
mydata.score.std()
mydata = mydata.score
mydata.head(1)
mydata = mydata.cumsum()
plt.figure(); mydata.plot();
plt.show()
fig = plt.figure()
ax = fig.add_subplot(111)
ax.hist(mydata.score.values,40)
plt.title('Answers votes distribution')
plt.xlabel('Votes')
plt.ylabel('Frequency')
ax.yaxis.label.set_size(25)
ax.xaxis.label.set_size(25)
ax.title.set_size(25)
plt.show()
###Output
_____no_output_____ |
built_in_functions/filter.ipynb | ###Markdown
filter()As the name suggests, filter creates a list of elements for which a function returns true. Here is a simple example:
###Code
def negative(x):
return x < 0
number_list = list(range(-5, 5))
less_than_zero = list(filter(negative, number_list))
print(number_list)
print(less_than_zero)
###Output
[-5, -4, -3, -2, -1, 0, 1, 2, 3, 4]
[-5, -4, -3, -2, -1]
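###Markdown
Note that in Python 3 `filter()` returns a lazy iterator rather than a list, which is why the examples here wrap it in `list()` to materialize the results.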
###Markdown
Just like `map()` you can use a lambda function for more concise code instead of defining one separately.
###Code
number_list = list(range(-5, 5))
less_than_zero = list(filter(lambda x: x < 0, number_list))
print(number_list)
print(less_than_zero)
###Output
[-5, -4, -3, -2, -1, 0, 1, 2, 3, 4]
[-5, -4, -3, -2, -1]
###Markdown
The filter resembles a for loop, but it is a built-in function and faster. Note: if `map` & `filter` do not appear beautiful to you, you can also use list/dict/tuple comprehensions.
###Code
number_list = list(range(-5, 5))
less_than_zero = [x for x in number_list if x < 0]
print(number_list)
print(less_than_zero)
###Output
[-5, -4, -3, -2, -1, 0, 1, 2, 3, 4]
[-5, -4, -3, -2, -1]
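###Markdown
For comparison, here is the explicit loop that the one-liners above replace:
###Code
number_list = list(range(-5, 5))
less_than_zero = []
for x in number_list:
    if x < 0:
        less_than_zero.append(x)
print(less_than_zero)
###Output
[-5, -4, -3, -2, -1]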
###Markdown
You can use `filter()` for many things that require some condition, but just to give another example, you can use it to get just the even or odd numbers from a list
###Code
number_list = list(range(-5, 5))
even_numbers = list(filter(lambda x : x % 2 == 0, number_list))
odd_numbers = list(filter(lambda x : x % 2 == 1, number_list))
print(number_list)
print(even_numbers)
print(odd_numbers)
###Output
[-5, -4, -3, -2, -1, 0, 1, 2, 3, 4]
[-4, -2, 0, 2, 4]
[-5, -3, -1, 1, 3]
|
Dimensionality Reduction/PCA/SparsePCA.ipynb | ###Markdown
Sparse PCA This code template is for Sparse Principal Component Analysis (SparsePCA) in Python, a dimensionality reduction technique. It is used to decompose a multivariate dataset into a set of successive orthogonal components that explain a maximum amount of the variance, keeping only the most significant singular vectors to project the data to a lower-dimensional space. Required Packages
###Code
import warnings
import itertools
import numpy as np
import pandas as pd
import seaborn as se
import matplotlib.pyplot as plt
from mpl_toolkits import mplot3d
from sklearn.preprocessing import LabelEncoder
from sklearn.decomposition import SparsePCA
from numpy.linalg import eigh
warnings.filterwarnings('ignore')
###Output
_____no_output_____
###Markdown
InitializationFilepath of CSV file
###Code
#filepath
file_path= " "
###Output
_____no_output_____
###Markdown
List of features which are required for model training .
###Code
#x_values
features=[]
###Output
_____no_output_____
###Markdown
Target feature for prediction.
###Code
#y_value
target=' '
###Output
_____no_output_____
###Markdown
Data FetchingPandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools. We will use the pandas library to read the CSV file from its storage path, and the head function to display the first few rows.
###Code
df=pd.read_csv(file_path)
df.head()
###Output
_____no_output_____
###Markdown
Feature SelectionsFeature selection is the process of reducing the number of input variables when developing a predictive model, both to reduce the computational cost of modelling and, in some cases, to improve the performance of the model. We will assign all the required input features to X and the target/outcome to Y.
###Code
X = df[features]
Y = df[target]
###Output
_____no_output_____
###Markdown
Data PreprocessingSince the majority of the machine learning models in the Sklearn library don't handle string category data or null values, we have to explicitly remove or replace them. The snippet below defines functions that remove null values if any exist, and convert string-class data in the dataset by encoding it to integer classes.
###Code
def NullClearner(df):
if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])):
df.fillna(df.mean(),inplace=True)
return df
elif(isinstance(df, pd.Series)):
df.fillna(df.mode()[0],inplace=True)
return df
else:return df
def EncodeX(df):
return pd.get_dummies(df)
x=X.columns.to_list()
for i in x:
X[i]=NullClearner(X[i])
X=EncodeX(X)
Y=NullClearner(Y)
X.head()
###Output
_____no_output_____
###Markdown
Correlation MapIn order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns.
###Code
f,ax = plt.subplots(figsize=(18, 18))
matrix = np.triu(X.corr())
se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix)
plt.show()
###Output
_____no_output_____
###Markdown
Choosing the number of componentsA vital part of using Sparse PCA in practice is the ability to estimate how many components are needed to describe the data. This can be determined by looking at the cumulative explained variance ratio as a function of the number of components. This curve quantifies how much of the total variance is contained within the first N components. Explained Variance Explained variance refers to the variance explained by each of the principal components (eigenvectors). It can be represented as the ratio of the related eigenvalue to the sum of the eigenvalues of all eigenvectors. The function below returns a list with the values of explained variance and also plots the cumulative explained variance
###Code
def explained_variance_plot(X):
cov_matrix = np.cov(X, rowvar=False) #this function returns the co-variance matrix for the features
egnvalues, egnvectors = eigh(cov_matrix) #eigen decomposition is done here to fetch eigen-values and eigen-vectors
total_egnvalues = sum(egnvalues)
var_exp = [(i/total_egnvalues) for i in sorted(egnvalues, reverse=True)]
plt.plot(np.cumsum(var_exp))
plt.xlabel('number of components')
plt.ylabel('cumulative explained variance');
return var_exp
var_exp=explained_variance_plot(X)
###Output
_____no_output_____
###Markdown
Scree plotThe scree plot helps you to determine the optimal number of components. The eigenvalue of each component in the initial solution is plotted. Generally, you want to extract the components on the steep slope. The components on the shallow slope contribute little to the solution.
###Code
plt.plot(var_exp, 'ro-', linewidth=2)
plt.title('Scree Plot')
plt.xlabel('Principal Component')
plt.ylabel('Proportion of Variance Explained')
plt.show()
###Output
_____no_output_____
###Markdown
ModelSparse PCA is used to decompose a multivariate dataset into a set of successive orthogonal components that explain a maximum amount of the variance. In scikit-learn, Sparse PCA finds the set of sparse components that can optimally reconstruct the data. The amount of sparseness is controllable by the coefficient of the L1 penalty, given by the parameter alpha. Tuning parameters reference: [API](https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.SparsePCA.html)
###Code
spca = SparsePCA(n_components=3)
spcaX = pd.DataFrame(data = spca.fit_transform(X))
###Output
_____no_output_____
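###Markdown
The alpha parameter mentioned above controls the amount of sparsity directly; a minimal sketch (the alpha value here is an illustrative assumption, not a tuned choice):
###Code
spca_sparse = SparsePCA(n_components=3, alpha=2) # larger alpha -> sparser components
spcaX_sparse = spca_sparse.fit_transform(X)
print((spca_sparse.components_ == 0).mean()) # fraction of exactly-zero loadings
###Output
_____no_output_____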
###Markdown
Output Dataframe
###Code
finalDf = pd.concat([spcaX, Y], axis = 1)
finalDf.head()
###Output
_____no_output_____
###Markdown
Sparse PCA This code template is for Sparse Principal Component Analysis (SparsePCA) in Python, a dimensionality reduction technique. It is used to decompose a multivariate dataset into a set of successive orthogonal components that explain a maximum amount of the variance, keeping only the most significant singular vectors to project the data to a lower-dimensional space. Required Packages
###Code
import warnings
import itertools
import numpy as np
import pandas as pd
import seaborn as se
import matplotlib.pyplot as plt
from mpl_toolkits import mplot3d
from sklearn.preprocessing import LabelEncoder
from sklearn.decomposition import SparsePCA
from numpy.linalg import eigh
warnings.filterwarnings('ignore')
###Output
_____no_output_____
###Markdown
InitializationFilepath of CSV file
###Code
#filepath
file_path= " "
###Output
_____no_output_____
###Markdown
List of features which are required for model training .
###Code
#x_values
features=[]
###Output
_____no_output_____
###Markdown
Target feature for prediction.
###Code
#y_value
target=' '
###Output
_____no_output_____
###Markdown
Data FetchingPandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools. We will use the pandas library to read the CSV file from its storage path, and the head function to display the first few rows.
###Code
df=pd.read_csv(file_path)
df.head()
###Output
_____no_output_____
###Markdown
Feature SelectionsFeature selection is the process of reducing the number of input variables when developing a predictive model, both to reduce the computational cost of modelling and, in some cases, to improve the performance of the model. We will assign all the required input features to X and the target/outcome to Y.
###Code
X = df[features]
Y = df[target]
###Output
_____no_output_____
###Markdown
Data PreprocessingSince the majority of the machine learning models in the Sklearn library don't handle string category data or null values, we have to explicitly remove or replace them. The snippet below defines functions that remove null values if any exist, and convert string-class data in the dataset by encoding it to integer classes.
###Code
def NullClearner(df):
if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])):
df.fillna(df.mean(),inplace=True)
return df
elif(isinstance(df, pd.Series)):
df.fillna(df.mode()[0],inplace=True)
return df
else:return df
def EncodeX(df):
return pd.get_dummies(df)
def EncodeY(df):
if len(df.unique())<=2:
return df
else:
un_EncodedT=np.sort(pd.unique(df), axis=-1, kind='mergesort')
df=LabelEncoder().fit_transform(df)
EncodedT=[xi for xi in range(len(un_EncodedT))]
print("Encoded Target: {} to {}".format(un_EncodedT,EncodedT))
return df
x=X.columns.to_list()
for i in x:
X[i]=NullClearner(X[i])
X=EncodeX(X)
Y=EncodeY(NullClearner(Y))
X.head()
###Output
_____no_output_____
###Markdown
Correlation MapIn order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns.
###Code
f,ax = plt.subplots(figsize=(18, 18))
matrix = np.triu(X.corr())
se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix)
plt.show()
###Output
_____no_output_____
###Markdown
Choosing the number of componentsA vital part of using Sparse PCA in practice is the ability to estimate how many components are needed to describe the data. This can be determined by looking at the cumulative explained variance ratio as a function of the number of components. This curve quantifies how much of the total variance is contained within the first N components. Explained Variance Explained variance refers to the variance explained by each of the principal components (eigenvectors). It can be represented as the ratio of the related eigenvalue to the sum of the eigenvalues of all eigenvectors. The function below returns a list with the values of explained variance and also plots the cumulative explained variance
###Code
def explained_variance_plot(X):
cov_matrix = np.cov(X, rowvar=False) #this function returns the co-variance matrix for the features
egnvalues, egnvectors = eigh(cov_matrix) #eigen decomposition is done here to fetch eigen-values and eigen-vectors
total_egnvalues = sum(egnvalues)
var_exp = [(i/total_egnvalues) for i in sorted(egnvalues, reverse=True)]
plt.plot(np.cumsum(var_exp))
plt.xlabel('number of components')
plt.ylabel('cumulative explained variance');
return var_exp
var_exp=explained_variance_plot(X)
###Output
_____no_output_____
###Markdown
Scree plotThe scree plot helps you to determine the optimal number of components. The eigenvalue of each component in the initial solution is plotted. Generally, you want to extract the components on the steep slope. The components on the shallow slope contribute little to the solution.
###Code
plt.plot(var_exp, 'ro-', linewidth=2)
plt.title('Scree Plot')
plt.xlabel('Principal Component')
plt.ylabel('Proportion of Variance Explained')
plt.show()
###Output
_____no_output_____
###Markdown
ModelSparse PCA is used to decompose a multivariate dataset into a set of successive orthogonal components that explain a maximum amount of the variance. In scikit-learn, Sparse PCA finds the set of sparse components that can optimally reconstruct the data. The amount of sparseness is controllable by the coefficient of the L1 penalty, given by the parameter alpha. Tuning parameters reference: [API](https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.SparsePCA.html)
###Code
spca = SparsePCA(n_components=3)
spcaX = pd.DataFrame(data = spca.fit_transform(X))
###Output
_____no_output_____
###Markdown
Output Dataframe
###Code
finalDf = pd.concat([spcaX, Y], axis = 1)
finalDf.head()
###Output
_____no_output_____
###Markdown
Sparse PCA This code template is for Sparse Principal Component Analysis(SparsePCA) in python for dimensionality reduction technique.It is used to decompose a multivariate dataset into a set of successive orthogonal components that explain a maximum amount of the variance, keeping only the most significant singular vectors to project the data to a lower dimensional space. Required Packages
###Code
import warnings
import itertools
import numpy as np
import pandas as pd
import seaborn as se
import matplotlib.pyplot as plt
import matplotlib.pyplot as plt
from mpl_toolkits import mplot3d
from sklearn.preprocessing import LabelEncoder
from sklearn.decomposition import SparsePCA
from numpy.linalg import eigh
warnings.filterwarnings('ignore')
###Output
_____no_output_____
###Markdown
InitializationFilepath of CSV file
###Code
#filepath
file_path= " "
###Output
_____no_output_____
###Markdown
List of features which are required for model training .
###Code
#x_values
features=[]
###Output
_____no_output_____
###Markdown
Target feature for prediction.
###Code
#y_value
target=' '
###Output
_____no_output_____
###Markdown
Data FetchingPandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools.We will use panda's library to read the CSV file using its storage path.And we use the head function to display the initial row or entry.
###Code
df=pd.read_csv(file_path)
df.head()
###Output
_____no_output_____
###Markdown
Feature SelectionsIt is the process of reducing the number of input variables when developing a predictive model. Used to reduce the number of input variables to both reduce the computational cost of modelling and, in some cases, to improve the performance of the model.We will assign all the required input features to X and target/outcome to Y.
###Code
X = df[features]
Y = df[target]
###Output
_____no_output_____
###Markdown
Data PreprocessingSince the majority of the machine learning models in the Sklearn library doesn't handle string category data and Null value, we have to explicitly remove or replace null values. The below snippet have functions, which removes the null value if any exists. And convert the string classes data in the datasets by encoding them to integer classes.
###Code
def NullClearner(df):
if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])):
df.fillna(df.mean(),inplace=True)
return df
elif(isinstance(df, pd.Series)):
df.fillna(df.mode()[0],inplace=True)
return df
else:return df
def EncodeX(df):
return pd.get_dummies(df)
x=X.columns.to_list()
for i in x:
X[i]=NullClearner(X[i])
X=EncodeX(X)
Y=NullClearner(Y)
X.head()
###Output
_____no_output_____
###Markdown
Correlation MapIn order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns.
###Code
f,ax = plt.subplots(figsize=(18, 18))
matrix = np.triu(X.corr())
se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix)
plt.show()
###Output
_____no_output_____
###Markdown
Choosing the number of componentsA vital part of using Sparse PCA in practice is the ability to estimate how many components are needed to describe the data. This can be determined by looking at the cumulative explained variance ratio as a function of the number of components.This curve quantifies how much of the total, dimensional variance is contained within the first N components. Explained Variance Explained variance refers to the variance explained by each of the principal components (eigenvectors). It can be represented as a function of ratio of related eigenvalue and sum of eigenvalues of all eigenvectors. The function below returns a list with the values of explained variance and also plots cumulative explained variance
###Code
def explained_variance_plot(X):
cov_matrix = np.cov(X, rowvar=False) #this function returns the co-variance matrix for the features
egnvalues, egnvectors = eigh(cov_matrix) #eigen decomposition is done here to fetch eigen-values and eigen-vectors
total_egnvalues = sum(egnvalues)
var_exp = [(i/total_egnvalues) for i in sorted(egnvalues, reverse=True)]
plt.plot(np.cumsum(var_exp))
plt.xlabel('number of components')
plt.ylabel('cumulative explained variance');
return var_exp
var_exp=explained_variance_plot(X)
###Output
_____no_output_____
###Markdown
Scree Plot The scree plot helps you determine the optimal number of components. The eigenvalue of each component in the initial solution is plotted. Generally, you want to extract the components on the steep slope; the components on the shallow slope contribute little to the solution.
###Code
plt.plot(var_exp, 'ro-', linewidth=2)
plt.title('Scree Plot')
plt.xlabel('Principal Component')
plt.ylabel('Proportion of Variance Explained')
plt.show()
###Output
_____no_output_____
###Markdown
Model Sparse PCA is used to decompose a multivariate dataset into a set of successive orthogonal components that explain a maximum amount of the variance. In scikit-learn, Sparse PCA finds the set of sparse components that can optimally reconstruct the data. The amount of sparseness is controllable by the coefficient of the L1 penalty, given by the parameter alpha. Tuning parameters reference: [API](https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.SparsePCA.html)
###Code
spca = SparsePCA(n_components=3)
spcaX = pd.DataFrame(data = spca.fit_transform(X))
###Output
_____no_output_____
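###Markdown
 As a quick sanity check on the fit above, the minimal sketch below (assuming the `spca` object from the previous cell) reports what fraction of the component loadings are exactly zero; raising the `alpha` penalty should drive this fraction up.
###Code
# Inspect how sparse the fitted components are (rows = components, columns = features)
print(spca.components_.shape)
zero_fraction = np.mean(spca.components_ == 0)
print(f"Fraction of zero loadings: {zero_fraction:.2f}")
###Output
_____no_output_____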
###Markdown
Output Dataframe
###Code
finalDf = pd.concat([spcaX, Y], axis = 1)
finalDf.head()
###Output
_____no_output_____ |
12-Web-Scraping-and-Document-Databases/2/Activities/07-Ins_Splinter/Solved/.ipynb_checkpoints/Ins_Splinter-checkpoint.ipynb | ###Markdown
Mac Users
###Code
# https://splinter.readthedocs.io/en/latest/drivers/chrome.html
from splinter import Browser

!which chromedriver
executable_path = {'executable_path': '/usr/local/bin/chromedriver'}
browser = Browser('chrome', **executable_path, headless=False)
###Output
_____no_output_____
###Markdown
Windows Users
###Code
from splinter import Browser
from bs4 import BeautifulSoup

executable_path = {'executable_path': 'chromedriver.exe'}
browser = Browser('chrome', **executable_path, headless=True)
url = 'http://quotes.toscrape.com/'
browser.visit(url)
for x in range(1, 6):
    html = browser.html
    soup = BeautifulSoup(html, 'html.parser')
quotes = soup.find_all('span', class_='text')
for quote in quotes:
print('page:', x, '-------------')
print(quote.text)
browser.click_link_by_partial_text('Next')
###Output
page: 1 -------------
“The world as we have created it is a process of our thinking. It cannot be changed without changing our thinking.”
page: 1 -------------
“It is our choices, Harry, that show what we truly are, far more than our abilities.”
page: 1 -------------
“There are only two ways to live your life. One is as though nothing is a miracle. The other is as though everything is a miracle.”
page: 1 -------------
“The person, be it gentleman or lady, who has not pleasure in a good novel, must be intolerably stupid.”
page: 1 -------------
“Imperfection is beauty, madness is genius and it's better to be absolutely ridiculous than absolutely boring.”
page: 1 -------------
“Try not to become a man of success. Rather become a man of value.”
page: 1 -------------
“It is better to be hated for what you are than to be loved for what you are not.”
page: 1 -------------
“I have not failed. I've just found 10,000 ways that won't work.”
page: 1 -------------
“A woman is like a tea bag; you never know how strong it is until it's in hot water.”
page: 1 -------------
“A day without sunshine is like, you know, night.”
page: 2 -------------
“This life is what you make it. No matter what, you're going to mess up sometimes, it's a universal truth. But the good part is you get to decide how you're going to mess it up. Girls will be your friends - they'll act like it anyway. But just remember, some come, some go. The ones that stay with you through everything - they're your true best friends. Don't let go of them. Also remember, sisters make the best friends in the world. As for lovers, well, they'll come and go too. And baby, I hate to say it, most of them - actually pretty much all of them are going to break your heart, but you can't give up because if you give up, you'll never find your soulmate. You'll never find that half who makes you whole and that goes for everything. Just because you fail once, doesn't mean you're gonna fail at everything. Keep trying, hold on, and always, always, always believe in yourself, because if you don't, then who will, sweetie? So keep your head high, keep your chin up, and most importantly, keep smiling, because life's a beautiful thing and there's so much to smile about.”
page: 2 -------------
“It takes a great deal of bravery to stand up to our enemies, but just as much to stand up to our friends.”
page: 2 -------------
“If you can't explain it to a six year old, you don't understand it yourself.”
page: 2 -------------
“You may not be her first, her last, or her only. She loved before she may love again. But if she loves you now, what else matters? She's not perfect—you aren't either, and the two of you may never be perfect together but if she can make you laugh, cause you to think twice, and admit to being human and making mistakes, hold onto her and give her the most you can. She may not be thinking about you every second of the day, but she will give you a part of her that she knows you can break—her heart. So don't hurt her, don't change her, don't analyze and don't expect more than she can give. Smile when she makes you happy, let her know when she makes you mad, and miss her when she's not there.”
page: 2 -------------
“I like nonsense, it wakes up the brain cells. Fantasy is a necessary ingredient in living.”
page: 2 -------------
“I may not have gone where I intended to go, but I think I have ended up where I needed to be.”
page: 2 -------------
“The opposite of love is not hate, it's indifference. The opposite of art is not ugliness, it's indifference. The opposite of faith is not heresy, it's indifference. And the opposite of life is not death, it's indifference.”
page: 2 -------------
“It is not a lack of love, but a lack of friendship that makes unhappy marriages.”
page: 2 -------------
“Good friends, good books, and a sleepy conscience: this is the ideal life.”
page: 2 -------------
“Life is what happens to us while we are making other plans.”
page: 3 -------------
“I love you without knowing how, or when, or from where. I love you simply, without problems or pride: I love you in this way because I do not know any other way of loving but this, in which there is no I or you, so intimate that your hand upon my chest is my hand, so intimate that when I fall asleep your eyes close.”
page: 3 -------------
“For every minute you are angry you lose sixty seconds of happiness.”
page: 3 -------------
“If you judge people, you have no time to love them.”
page: 3 -------------
“Anyone who thinks sitting in church can make you a Christian must also think that sitting in a garage can make you a car.”
page: 3 -------------
“Beauty is in the eye of the beholder and it may be necessary from time to time to give a stupid or misinformed beholder a black eye.”
page: 3 -------------
“Today you are You, that is truer than true. There is no one alive who is Youer than You.”
page: 3 -------------
“If you want your children to be intelligent, read them fairy tales. If you want them to be more intelligent, read them more fairy tales.”
page: 3 -------------
“It is impossible to live without failing at something, unless you live so cautiously that you might as well not have lived at all - in which case, you fail by default.”
page: 3 -------------
“Logic will get you from A to Z; imagination will get you everywhere.”
page: 3 -------------
“One good thing about music, when it hits you, you feel no pain.”
page: 4 -------------
“The more that you read, the more things you will know. The more that you learn, the more places you'll go.”
page: 4 -------------
“Of course it is happening inside your head, Harry, but why on earth should that mean that it is not real?”
page: 4 -------------
“The truth is, everyone is going to hurt you. You just got to find the ones worth suffering for.”
page: 4 -------------
“Not all of us can do great things. But we can do small things with great love.”
page: 4 -------------
“To the well-organized mind, death is but the next great adventure.”
page: 4 -------------
“All you need is love. But a little chocolate now and then doesn't hurt.”
page: 4 -------------
“We read to know we're not alone.”
page: 4 -------------
“Any fool can know. The point is to understand.”
page: 4 -------------
“I have always imagined that Paradise will be a kind of library.”
page: 4 -------------
“It is never too late to be what you might have been.”
page: 5 -------------
“A reader lives a thousand lives before he dies, said Jojen. The man who never reads lives only one.”
page: 5 -------------
“You can never get a cup of tea large enough or a book long enough to suit me.”
page: 5 -------------
“You believe lies so you eventually learn to trust no one but yourself.”
page: 5 -------------
“If you can make a woman laugh, you can make her do anything.”
page: 5 -------------
“Life is like riding a bicycle. To keep your balance, you must keep moving.”
page: 5 -------------
“The real lover is the man who can thrill you by kissing your forehead or smiling into your eyes or just staring into space.”
page: 5 -------------
“A wise girl kisses but doesn't love, listens but doesn't believe, and leaves before she is left.”
page: 5 -------------
“Only in the darkness can you see the stars.”
page: 5 -------------
“It matters not what someone is born, but what they grow to be.”
page: 5 -------------
“Love does not begin and end the way we seem to think it does. Love is a battle, love is a war; love is a growing up.”
|
data structure/array and linked list/Pascal's-Triangle.ipynb | ###Markdown
Problem Statement Find and return the `nth` row of Pascal's triangle in the form of a list. `n` is 0-based.For example, if `n = 4`, then `output = [1, 4, 6, 4, 1]`.To know more about Pascal's triangle: https://www.mathsisfun.com/pascals-triangle.html
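Each interior entry of row $n$ is the sum of the two entries above it: $\binom{n}{k} = \binom{n-1}{k-1} + \binom{n-1}{k}$, with $\binom{n}{0} = \binom{n}{n} = 1$, which suggests building each row from the previous one.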
###Code
def nth_row_pascal(n):
    """
    :param: - n - index (0 based)
    return - list() representing nth row of Pascal's triangle
    """
    # One possible solution: build each row from the previous one via the Pascal recurrence
    row = [1]
    for _ in range(n):
        row = [1] + [row[i] + row[i + 1] for i in range(len(row) - 1)] + [1]
    return row
###Output
_____no_output_____
###Markdown
Show Solution
###Code
def test_function(test_case):
n = test_case[0]
solution = test_case[1]
output = nth_row_pascal(n)
if solution == output:
print("Pass")
else:
print("Fail")
n = 0
solution = [1]
test_case = [n, solution]
test_function(test_case)
n = 1
solution = [1, 1]
test_case = [n, solution]
test_function(test_case)
n = 2
solution = [1, 2, 1]
test_case = [n, solution]
test_function(test_case)
n = 3
solution = [1, 3, 3, 1]
test_case = [n, solution]
test_function(test_case)
n = 4
solution = [1, 4, 6, 4, 1]
test_case = [n, solution]
test_function(test_case)
###Output
Pass
|
user_docs/process_blobs.ipynb | ###Markdown
Segment + analyze blobsIn this notebook we demonstrate how images can be processed on GPUs, objects segmented and afterwards measured with [scikit-image](https://scikit-image.org/).
###Code
from pyclesperanto import cle
from skimage.io import imread
from skimage.measure import regionprops_table
import pandas as pd
###Output
_____no_output_____
###Markdown
We first load an image using scikit-image's `imread()` function and visualize it using clesperanto's `imshow()` function, which under the hood uses functionality similar to scikit-image's for showing images.
###Code
image = imread("https://imagej.nih.gov/ij/images/blobs.gif")
cle.imshow(image)
###Output
_____no_output_____
###Markdown
We invert the image
###Code
inverted_image = cle.subtract_image_from_scalar(image, scalar=255)
cle.imshow(inverted_image)
###Output
_____no_output_____
###Markdown
We can blur this image using a `gaussian_blur` filter. All filters and image processing operations are available via the `cle.` gateway.
###Code
blurred_image = cle.gaussian_blur(inverted_image, sigma_x=3, sigma_y=3)
cle.imshow(blurred_image)
###Output
_____no_output_____
###Markdown
Thresholding and connected component labeling also work via the `cle` gateway. Furthermore, the `imshow` function has some built-in conveniences for visualizing label images of segmented blobs.
###Code
binary_image = cle.threshold_otsu(blurred_image)
label_image = cle.connected_components_labeling_box(binary_image)
cle.imshow(label_image, labels=True)
###Output
_____no_output_____
###Markdown
Before we can pass the resulting label image to another function, e.g. from scikit-image, we need to pull it back to CPU memory, which converts it into a numpy array.
###Code
numpy_label_image = cle.pull(label_image)
table = regionprops_table(image, numpy_label_image, properties=['label', 'area', 'mean_intensity'])
pd.DataFrame(table)
###Output
_____no_output_____ |
Data_Science_from_Scratch ~ Book/Data_Science_Chapter_9.ipynb | ###Markdown
Getting Data
###Code
# script to read lines of text and spit back out the ones
# that match a regular expression
import sys, re
regex = sys.argv[1]
for line in sys.stdin:
    if re.search(regex, line):
        sys.stdout.write(line)
import sys
count = 0
for line in sys.stdin:
count += 1
print(count)
###Output
0
###Markdown
Working with some sample text
###Code
with open("sometext.txt") as f:
for line in f:
print(line)
# content from - https://en.wikipedia.org/wiki/Text_(literary_theory)
starts_with_A = 0
with open("sometext.txt") as f:
for line in f:
if re.match("^A",line):
starts_with_A += 1
print(starts_with_A)
starts_with_I = 0
with open("sometext.txt") as f:
for line in f:
if re.match("^I",line):
starts_with_I += 1
print(starts_with_I)
def get_domain(email:str) -> str:
return email.lower().split("@")[-1]
email = "[email protected]"
get_domain(email)
from collections import Counter
with open('emails.txt','r') as f:
domain_counts = Counter(get_domain(line.strip())
for line in f
if"@" in line)
print(domain_counts)
###Output
Counter({'mail.com': 1, 'gmail.com': 1, '123_mail.com': 1, 'science.com': 1})
###Markdown
Delimiter Files
###Code
import csv
with open('tab_delimited_stock_prices.txt') as f:
tab_reader = csv.reader(f, delimiter='\t')
for row in tab_reader:
date = row[0]
symbol = row[1]
closing_price = float(row[2])
print(date,symbol,closing_price)
with open('colon_delimited_stock_prices.txt') as f:
colon_reader = csv.DictReader(f, delimiter=':')
for dict_row in colon_reader:
date = dict_row["date"]
symbol = dict_row["symbol"]
closing_price = float(dict_row["closing_price"])
print(date, symbol, closing_price)
print(dict_row)
todays_prices = {'AAPL': 90.91, 'MSFT': 41.68, 'FB': 64.5 }
with open('comma_delimited_stock_prices.txt', 'w') as f:
csv_writer = csv.writer(f, delimiter=',')
for stock, price in todays_prices.items():
csv_writer.writerow([stock, price])
###Output
_____no_output_____
###Markdown
Scraping the Web
###Code
from bs4 import BeautifulSoup
import requests
url = ("https://raw.githubusercontent.com/joelgrus/data/master/getting-data.html")
html = requests.get(url).text
soup = BeautifulSoup(html, 'html.parser')
first_paragraph = soup.find('p')
print(first_paragraph)
first_paragraph_text = soup.p.text
first_paragraph_words = soup.p.text.split()
print(first_paragraph_text)
print()
print(first_paragraph_words)
first_paragraph_id = soup.p['id']
first_paragraph_id_2 = soup.p.get('id')
print(first_paragraph_id)
print()
print(first_paragraph_id_2)
all_paragraphs = soup.find_all('p')
print(all_paragraphs)
paragraphs_with_id = [p for p in soup('p') if p.get('id')]
print(paragraphs_with_id)
important_paragraphs = soup('p', {'class' : 'important'})
important_paragraphs2 = soup('p', 'important')
important_paragraphs3 = [p for p in soup('p')
if 'important' in p.get('class', [])]
print(important_paragraphs)
print()
print(important_paragraphs2)
print()
print(important_paragraphs3)
print()
spans_inside_divs = [span
for div in soup('div')
for span in div('span')]
###Output
_____no_output_____
###Markdown
Example for Web Scraping
###Code
from bs4 import BeautifulSoup
import requests
url = "https://www.house.gov/representatives"
text = requests.get(url).text
soup = BeautifulSoup(text, "html.parser")
all_urls = [a['href']
for a in soup('a')
if a.has_attr('href')]
print(len(all_urls))
import re
import pandas as pd
regex = r"https?://.*\.house\.gov/?$"
good_urls = [url for url in all_urls if re.match(regex,url)]
print(len(good_urls))
good_urls = list(set(good_urls))
print(len(good_urls))
html = requests.get('https://jayapal.house.gov').text
soup = BeautifulSoup(html, 'html.parser')
links = {a['href'] for a in soup('a') if 'press releases' in a.text.lower()}
print(links)
from typing import Dict, Set
press_releases: Dict[str, Set[str]] = {}
for house_url in good_urls:
html = requests.get(house_url).text
soup = BeautifulSoup(html, 'html.parser')
pr_links = {a['href'] for a in soup('a') if 'press releases'
in a.text.lower()}
print(f"{house_url}: {pr_links}")
press_releases[house_url] = pr_links
###Output
_____no_output_____
###Markdown
The above cell gives the following output:

https://carbajal.house.gov: set()
https://takano.house.gov: {'https://takano.house.gov/newsroom/press-releases'}
https://rice.house.gov: set()
https://desaulnier.house.gov/: {'/media-center/press-releases'}
https://maloney.house.gov/: {'/news/press-releases'}
https://chrissmith.house.gov/: set()
https://rodneydavis.house.gov: set()
https://bost.house.gov/: {'/media-center/press-releases'}
https://joyce.house.gov: {'/press-releases/'}
https://wittman.house.gov/: set()
https://doyle.house.gov: {'/media/press-releases'}
https://omar.house.gov/: {'/media/press-releases'}
https://smucker.house.gov/: {'/media/press-releases'}
https://moolenaar.house.gov/: {'/media-center/press-releases'}
https://perlmutter.house.gov/: set()
https://mceachin.house.gov: {'/media/press-releases'}
https://phillips.house.gov/: {'/media/press-releases'}
https://seanmaloney.house.gov: set()
...

APIs

JSON: similar to a Python dictionary
XML: similar to data from HTML
###Code
import json
serialized = """{ "title" : "Data Science Book",
"author" : "Joel Grus",
"publicationYear" : 2019,
"topics" : [ "data", "science", "data science"] }"""
deserialized = json.loads(serialized)
print(serialized)
print()
print(deserialized)
import requests, json
github_user = "kushagras71"
endpoint = f"https://api.github.com/users/{github_user}/repos"
repos = json.loads(requests.get(endpoint).text)
repos
!python -m pip install python-dateutil
from collections import Counter
from dateutil.parser import parse
dates = [parse(repo['created_at'])for repo in repos]
month_counts = Counter(date.month for date in dates)
weekday_counts = Counter(date.weekday() for date in dates)
print(dates)
print()
print(month_counts)
print()
print(weekday_counts)
last_5_repositories = sorted(repos,
key=lambda r: r["pushed_at"],
reverse=True)[:5]
last_5_languages = [repo["language"]
for repo in last_5_repositories]
print(last_5_languages)
print()
print(last_5_repositories)
###Output
['Jupyter Notebook', 'Jupyter Notebook', None, 'Jupyter Notebook', 'Jupyter Notebook']
[{'id': 295847929, 'node_id': 'MDEwOlJlcG9zaXRvcnkyOTU4NDc5Mjk=', 'name': 'data_science', 'full_name': 'kushagras71/data_science', 'private': False, 'owner': {'login': 'kushagras71', 'id': 58633364, 'node_id': 'MDQ6VXNlcjU4NjMzMzY0', 'avatar_url': 'https://avatars0.githubusercontent.com/u/58633364?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/kushagras71', 'html_url': 'https://github.com/kushagras71', 'followers_url': 'https://api.github.com/users/kushagras71/followers', 'following_url': 'https://api.github.com/users/kushagras71/following{/other_user}', 'gists_url': 'https://api.github.com/users/kushagras71/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/kushagras71/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/kushagras71/subscriptions', 'organizations_url': 'https://api.github.com/users/kushagras71/orgs', 'repos_url': 'https://api.github.com/users/kushagras71/repos', 'events_url': 'https://api.github.com/users/kushagras71/events{/privacy}', 'received_events_url': 'https://api.github.com/users/kushagras71/received_events', 'type': 'User', 'site_admin': False}, 'html_url': 'https://github.com/kushagras71/data_science', 'description': 'Just some code for data science', 'fork': False, 'url': 'https://api.github.com/repos/kushagras71/data_science', 'forks_url': 'https://api.github.com/repos/kushagras71/data_science/forks', 'keys_url': 'https://api.github.com/repos/kushagras71/data_science/keys{/key_id}', 'collaborators_url': 'https://api.github.com/repos/kushagras71/data_science/collaborators{/collaborator}', 'teams_url': 'https://api.github.com/repos/kushagras71/data_science/teams', 'hooks_url': 'https://api.github.com/repos/kushagras71/data_science/hooks', 'issue_events_url': 'https://api.github.com/repos/kushagras71/data_science/issues/events{/number}', 'events_url': 'https://api.github.com/repos/kushagras71/data_science/events', 'assignees_url': 'https://api.github.com/repos/kushagras71/data_science/assignees{/user}', 'branches_url': 'https://api.github.com/repos/kushagras71/data_science/branches{/branch}', 'tags_url': 'https://api.github.com/repos/kushagras71/data_science/tags', 'blobs_url': 'https://api.github.com/repos/kushagras71/data_science/git/blobs{/sha}', 'git_tags_url': 'https://api.github.com/repos/kushagras71/data_science/git/tags{/sha}', 'git_refs_url': 'https://api.github.com/repos/kushagras71/data_science/git/refs{/sha}', 'trees_url': 'https://api.github.com/repos/kushagras71/data_science/git/trees{/sha}', 'statuses_url': 'https://api.github.com/repos/kushagras71/data_science/statuses/{sha}', 'languages_url': 'https://api.github.com/repos/kushagras71/data_science/languages', 'stargazers_url': 'https://api.github.com/repos/kushagras71/data_science/stargazers', 'contributors_url': 'https://api.github.com/repos/kushagras71/data_science/contributors', 'subscribers_url': 'https://api.github.com/repos/kushagras71/data_science/subscribers', 'subscription_url': 'https://api.github.com/repos/kushagras71/data_science/subscription', 'commits_url': 'https://api.github.com/repos/kushagras71/data_science/commits{/sha}', 'git_commits_url': 'https://api.github.com/repos/kushagras71/data_science/git/commits{/sha}', 'comments_url': 'https://api.github.com/repos/kushagras71/data_science/comments{/number}', 'issue_comment_url': 'https://api.github.com/repos/kushagras71/data_science/issues/comments{/number}', 'contents_url': 'https://api.github.com/repos/kushagras71/data_science/contents/{+path}', 'compare_url': 
'https://api.github.com/repos/kushagras71/data_science/compare/{base}...{head}', 'merges_url': 'https://api.github.com/repos/kushagras71/data_science/merges', 'archive_url': 'https://api.github.com/repos/kushagras71/data_science/{archive_format}{/ref}', 'downloads_url': 'https://api.github.com/repos/kushagras71/data_science/downloads', 'issues_url': 'https://api.github.com/repos/kushagras71/data_science/issues{/number}', 'pulls_url': 'https://api.github.com/repos/kushagras71/data_science/pulls{/number}', 'milestones_url': 'https://api.github.com/repos/kushagras71/data_science/milestones{/number}', 'notifications_url': 'https://api.github.com/repos/kushagras71/data_science/notifications{?since,all,participating}', 'labels_url': 'https://api.github.com/repos/kushagras71/data_science/labels{/name}', 'releases_url': 'https://api.github.com/repos/kushagras71/data_science/releases{/id}', 'deployments_url': 'https://api.github.com/repos/kushagras71/data_science/deployments', 'created_at': '2020-09-15T20:59:01Z', 'updated_at': '2020-10-01T18:29:22Z', 'pushed_at': '2020-10-01T18:29:20Z', 'git_url': 'git://github.com/kushagras71/data_science.git', 'ssh_url': '[email protected]:kushagras71/data_science.git', 'clone_url': 'https://github.com/kushagras71/data_science.git', 'svn_url': 'https://github.com/kushagras71/data_science', 'homepage': None, 'size': 235, 'stargazers_count': 0, 'watchers_count': 0, 'language': 'Jupyter Notebook', 'has_issues': True, 'has_projects': True, 'has_downloads': True, 'has_wiki': True, 'has_pages': False, 'forks_count': 0, 'mirror_url': None, 'archived': False, 'disabled': False, 'open_issues_count': 0, 'license': {'key': 'apache-2.0', 'name': 'Apache License 2.0', 'spdx_id': 'Apache-2.0', 'url': 'https://api.github.com/licenses/apache-2.0', 'node_id': 'MDc6TGljZW5zZTI='}, 'forks': 0, 'open_issues': 0, 'watchers': 0, 'default_branch': 'master'}, {'id': 276174944, 'node_id': 'MDEwOlJlcG9zaXRvcnkyNzYxNzQ5NDQ=', 'name': 'natural_language_processing', 'full_name': 'kushagras71/natural_language_processing', 'private': False, 'owner': {'login': 'kushagras71', 'id': 58633364, 'node_id': 'MDQ6VXNlcjU4NjMzMzY0', 'avatar_url': 'https://avatars0.githubusercontent.com/u/58633364?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/kushagras71', 'html_url': 'https://github.com/kushagras71', 'followers_url': 'https://api.github.com/users/kushagras71/followers', 'following_url': 'https://api.github.com/users/kushagras71/following{/other_user}', 'gists_url': 'https://api.github.com/users/kushagras71/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/kushagras71/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/kushagras71/subscriptions', 'organizations_url': 'https://api.github.com/users/kushagras71/orgs', 'repos_url': 'https://api.github.com/users/kushagras71/repos', 'events_url': 'https://api.github.com/users/kushagras71/events{/privacy}', 'received_events_url': 'https://api.github.com/users/kushagras71/received_events', 'type': 'User', 'site_admin': False}, 'html_url': 'https://github.com/kushagras71/natural_language_processing', 'description': 'Natural Language Processing', 'fork': False, 'url': 'https://api.github.com/repos/kushagras71/natural_language_processing', 'forks_url': 'https://api.github.com/repos/kushagras71/natural_language_processing/forks', 'keys_url': 'https://api.github.com/repos/kushagras71/natural_language_processing/keys{/key_id}', 'collaborators_url': 
'https://api.github.com/repos/kushagras71/natural_language_processing/collaborators{/collaborator}', 'teams_url': 'https://api.github.com/repos/kushagras71/natural_language_processing/teams', 'hooks_url': 'https://api.github.com/repos/kushagras71/natural_language_processing/hooks', 'issue_events_url': 'https://api.github.com/repos/kushagras71/natural_language_processing/issues/events{/number}', 'events_url': 'https://api.github.com/repos/kushagras71/natural_language_processing/events', 'assignees_url': 'https://api.github.com/repos/kushagras71/natural_language_processing/assignees{/user}', 'branches_url': 'https://api.github.com/repos/kushagras71/natural_language_processing/branches{/branch}', 'tags_url': 'https://api.github.com/repos/kushagras71/natural_language_processing/tags', 'blobs_url': 'https://api.github.com/repos/kushagras71/natural_language_processing/git/blobs{/sha}', 'git_tags_url': 'https://api.github.com/repos/kushagras71/natural_language_processing/git/tags{/sha}', 'git_refs_url': 'https://api.github.com/repos/kushagras71/natural_language_processing/git/refs{/sha}', 'trees_url': 'https://api.github.com/repos/kushagras71/natural_language_processing/git/trees{/sha}', 'statuses_url': 'https://api.github.com/repos/kushagras71/natural_language_processing/statuses/{sha}', 'languages_url': 'https://api.github.com/repos/kushagras71/natural_language_processing/languages', 'stargazers_url': 'https://api.github.com/repos/kushagras71/natural_language_processing/stargazers', 'contributors_url': 'https://api.github.com/repos/kushagras71/natural_language_processing/contributors', 'subscribers_url': 'https://api.github.com/repos/kushagras71/natural_language_processing/subscribers', 'subscription_url': 'https://api.github.com/repos/kushagras71/natural_language_processing/subscription', 'commits_url': 'https://api.github.com/repos/kushagras71/natural_language_processing/commits{/sha}', 'git_commits_url': 'https://api.github.com/repos/kushagras71/natural_language_processing/git/commits{/sha}', 'comments_url': 'https://api.github.com/repos/kushagras71/natural_language_processing/comments{/number}', 'issue_comment_url': 'https://api.github.com/repos/kushagras71/natural_language_processing/issues/comments{/number}', 'contents_url': 'https://api.github.com/repos/kushagras71/natural_language_processing/contents/{+path}', 'compare_url': 'https://api.github.com/repos/kushagras71/natural_language_processing/compare/{base}...{head}', 'merges_url': 'https://api.github.com/repos/kushagras71/natural_language_processing/merges', 'archive_url': 'https://api.github.com/repos/kushagras71/natural_language_processing/{archive_format}{/ref}', 'downloads_url': 'https://api.github.com/repos/kushagras71/natural_language_processing/downloads', 'issues_url': 'https://api.github.com/repos/kushagras71/natural_language_processing/issues{/number}', 'pulls_url': 'https://api.github.com/repos/kushagras71/natural_language_processing/pulls{/number}', 'milestones_url': 'https://api.github.com/repos/kushagras71/natural_language_processing/milestones{/number}', 'notifications_url': 'https://api.github.com/repos/kushagras71/natural_language_processing/notifications{?since,all,participating}', 'labels_url': 'https://api.github.com/repos/kushagras71/natural_language_processing/labels{/name}', 'releases_url': 'https://api.github.com/repos/kushagras71/natural_language_processing/releases{/id}', 'deployments_url': 'https://api.github.com/repos/kushagras71/natural_language_processing/deployments', 'created_at': 
'2020-06-30T18:11:12Z', 'updated_at': '2020-09-30T05:57:43Z', 'pushed_at': '2020-09-30T05:57:41Z', 'git_url': 'git://github.com/kushagras71/natural_language_processing.git', 'ssh_url': '[email protected]:kushagras71/natural_language_processing.git', 'clone_url': 'https://github.com/kushagras71/natural_language_processing.git', 'svn_url': 'https://github.com/kushagras71/natural_language_processing', 'homepage': None, 'size': 1596, 'stargazers_count': 0, 'watchers_count': 0, 'language': 'Jupyter Notebook', 'has_issues': True, 'has_projects': True, 'has_downloads': True, 'has_wiki': True, 'has_pages': False, 'forks_count': 0, 'mirror_url': None, 'archived': False, 'disabled': False, 'open_issues_count': 0, 'license': {'key': 'apache-2.0', 'name': 'Apache License 2.0', 'spdx_id': 'Apache-2.0', 'url': 'https://api.github.com/licenses/apache-2.0', 'node_id': 'MDc6TGljZW5zZTI='}, 'forks': 0, 'open_issues': 0, 'watchers': 0, 'default_branch': 'master'}, {'id': 298047995, 'node_id': 'MDEwOlJlcG9zaXRvcnkyOTgwNDc5OTU=', 'name': 'the_research_papers_you_need', 'full_name': 'kushagras71/the_research_papers_you_need', 'private': False, 'owner': {'login': 'kushagras71', 'id': 58633364, 'node_id': 'MDQ6VXNlcjU4NjMzMzY0', 'avatar_url': 'https://avatars0.githubusercontent.com/u/58633364?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/kushagras71', 'html_url': 'https://github.com/kushagras71', 'followers_url': 'https://api.github.com/users/kushagras71/followers', 'following_url': 'https://api.github.com/users/kushagras71/following{/other_user}', 'gists_url': 'https://api.github.com/users/kushagras71/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/kushagras71/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/kushagras71/subscriptions', 'organizations_url': 'https://api.github.com/users/kushagras71/orgs', 'repos_url': 'https://api.github.com/users/kushagras71/repos', 'events_url': 'https://api.github.com/users/kushagras71/events{/privacy}', 'received_events_url': 'https://api.github.com/users/kushagras71/received_events', 'type': 'User', 'site_admin': False}, 'html_url': 'https://github.com/kushagras71/the_research_papers_you_need', 'description': None, 'fork': False, 'url': 'https://api.github.com/repos/kushagras71/the_research_papers_you_need', 'forks_url': 'https://api.github.com/repos/kushagras71/the_research_papers_you_need/forks', 'keys_url': 'https://api.github.com/repos/kushagras71/the_research_papers_you_need/keys{/key_id}', 'collaborators_url': 'https://api.github.com/repos/kushagras71/the_research_papers_you_need/collaborators{/collaborator}', 'teams_url': 'https://api.github.com/repos/kushagras71/the_research_papers_you_need/teams', 'hooks_url': 'https://api.github.com/repos/kushagras71/the_research_papers_you_need/hooks', 'issue_events_url': 'https://api.github.com/repos/kushagras71/the_research_papers_you_need/issues/events{/number}', 'events_url': 'https://api.github.com/repos/kushagras71/the_research_papers_you_need/events', 'assignees_url': 'https://api.github.com/repos/kushagras71/the_research_papers_you_need/assignees{/user}', 'branches_url': 'https://api.github.com/repos/kushagras71/the_research_papers_you_need/branches{/branch}', 'tags_url': 'https://api.github.com/repos/kushagras71/the_research_papers_you_need/tags', 'blobs_url': 'https://api.github.com/repos/kushagras71/the_research_papers_you_need/git/blobs{/sha}', 'git_tags_url': 'https://api.github.com/repos/kushagras71/the_research_papers_you_need/git/tags{/sha}', 'git_refs_url': 
'https://api.github.com/repos/kushagras71/the_research_papers_you_need/git/refs{/sha}', 'trees_url': 'https://api.github.com/repos/kushagras71/the_research_papers_you_need/git/trees{/sha}', 'statuses_url': 'https://api.github.com/repos/kushagras71/the_research_papers_you_need/statuses/{sha}', 'languages_url': 'https://api.github.com/repos/kushagras71/the_research_papers_you_need/languages', 'stargazers_url': 'https://api.github.com/repos/kushagras71/the_research_papers_you_need/stargazers', 'contributors_url': 'https://api.github.com/repos/kushagras71/the_research_papers_you_need/contributors', 'subscribers_url': 'https://api.github.com/repos/kushagras71/the_research_papers_you_need/subscribers', 'subscription_url': 'https://api.github.com/repos/kushagras71/the_research_papers_you_need/subscription', 'commits_url': 'https://api.github.com/repos/kushagras71/the_research_papers_you_need/commits{/sha}', 'git_commits_url': 'https://api.github.com/repos/kushagras71/the_research_papers_you_need/git/commits{/sha}', 'comments_url': 'https://api.github.com/repos/kushagras71/the_research_papers_you_need/comments{/number}', 'issue_comment_url': 'https://api.github.com/repos/kushagras71/the_research_papers_you_need/issues/comments{/number}', 'contents_url': 'https://api.github.com/repos/kushagras71/the_research_papers_you_need/contents/{+path}', 'compare_url': 'https://api.github.com/repos/kushagras71/the_research_papers_you_need/compare/{base}...{head}', 'merges_url': 'https://api.github.com/repos/kushagras71/the_research_papers_you_need/merges', 'archive_url': 'https://api.github.com/repos/kushagras71/the_research_papers_you_need/{archive_format}{/ref}', 'downloads_url': 'https://api.github.com/repos/kushagras71/the_research_papers_you_need/downloads', 'issues_url': 'https://api.github.com/repos/kushagras71/the_research_papers_you_need/issues{/number}', 'pulls_url': 'https://api.github.com/repos/kushagras71/the_research_papers_you_need/pulls{/number}', 'milestones_url': 'https://api.github.com/repos/kushagras71/the_research_papers_you_need/milestones{/number}', 'notifications_url': 'https://api.github.com/repos/kushagras71/the_research_papers_you_need/notifications{?since,all,participating}', 'labels_url': 'https://api.github.com/repos/kushagras71/the_research_papers_you_need/labels{/name}', 'releases_url': 'https://api.github.com/repos/kushagras71/the_research_papers_you_need/releases{/id}', 'deployments_url': 'https://api.github.com/repos/kushagras71/the_research_papers_you_need/deployments', 'created_at': '2020-09-23T17:40:34Z', 'updated_at': '2020-09-27T05:25:10Z', 'pushed_at': '2020-09-27T05:25:08Z', 'git_url': 'git://github.com/kushagras71/the_research_papers_you_need.git', 'ssh_url': '[email protected]:kushagras71/the_research_papers_you_need.git', 'clone_url': 'https://github.com/kushagras71/the_research_papers_you_need.git', 'svn_url': 'https://github.com/kushagras71/the_research_papers_you_need', 'homepage': None, 'size': 17091, 'stargazers_count': 0, 'watchers_count': 0, 'language': None, 'has_issues': True, 'has_projects': True, 'has_downloads': True, 'has_wiki': True, 'has_pages': False, 'forks_count': 0, 'mirror_url': None, 'archived': False, 'disabled': False, 'open_issues_count': 0, 'license': {'key': 'apache-2.0', 'name': 'Apache License 2.0', 'spdx_id': 'Apache-2.0', 'url': 'https://api.github.com/licenses/apache-2.0', 'node_id': 'MDc6TGljZW5zZTI='}, 'forks': 0, 'open_issues': 0, 'watchers': 0, 'default_branch': 'master'}, {'id': 275891560, 'node_id': 
'MDEwOlJlcG9zaXRvcnkyNzU4OTE1NjA=', 'name': 'understanding_ML', 'full_name': 'kushagras71/understanding_ML', 'private': False, 'owner': {'login': 'kushagras71', 'id': 58633364, 'node_id': 'MDQ6VXNlcjU4NjMzMzY0', 'avatar_url': 'https://avatars0.githubusercontent.com/u/58633364?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/kushagras71', 'html_url': 'https://github.com/kushagras71', 'followers_url': 'https://api.github.com/users/kushagras71/followers', 'following_url': 'https://api.github.com/users/kushagras71/following{/other_user}', 'gists_url': 'https://api.github.com/users/kushagras71/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/kushagras71/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/kushagras71/subscriptions', 'organizations_url': 'https://api.github.com/users/kushagras71/orgs', 'repos_url': 'https://api.github.com/users/kushagras71/repos', 'events_url': 'https://api.github.com/users/kushagras71/events{/privacy}', 'received_events_url': 'https://api.github.com/users/kushagras71/received_events', 'type': 'User', 'site_admin': False}, 'html_url': 'https://github.com/kushagras71/understanding_ML', 'description': 'The building blocks of various neural networks.', 'fork': False, 'url': 'https://api.github.com/repos/kushagras71/understanding_ML', 'forks_url': 'https://api.github.com/repos/kushagras71/understanding_ML/forks', 'keys_url': 'https://api.github.com/repos/kushagras71/understanding_ML/keys{/key_id}', 'collaborators_url': 'https://api.github.com/repos/kushagras71/understanding_ML/collaborators{/collaborator}', 'teams_url': 'https://api.github.com/repos/kushagras71/understanding_ML/teams', 'hooks_url': 'https://api.github.com/repos/kushagras71/understanding_ML/hooks', 'issue_events_url': 'https://api.github.com/repos/kushagras71/understanding_ML/issues/events{/number}', 'events_url': 'https://api.github.com/repos/kushagras71/understanding_ML/events', 'assignees_url': 'https://api.github.com/repos/kushagras71/understanding_ML/assignees{/user}', 'branches_url': 'https://api.github.com/repos/kushagras71/understanding_ML/branches{/branch}', 'tags_url': 'https://api.github.com/repos/kushagras71/understanding_ML/tags', 'blobs_url': 'https://api.github.com/repos/kushagras71/understanding_ML/git/blobs{/sha}', 'git_tags_url': 'https://api.github.com/repos/kushagras71/understanding_ML/git/tags{/sha}', 'git_refs_url': 'https://api.github.com/repos/kushagras71/understanding_ML/git/refs{/sha}', 'trees_url': 'https://api.github.com/repos/kushagras71/understanding_ML/git/trees{/sha}', 'statuses_url': 'https://api.github.com/repos/kushagras71/understanding_ML/statuses/{sha}', 'languages_url': 'https://api.github.com/repos/kushagras71/understanding_ML/languages', 'stargazers_url': 'https://api.github.com/repos/kushagras71/understanding_ML/stargazers', 'contributors_url': 'https://api.github.com/repos/kushagras71/understanding_ML/contributors', 'subscribers_url': 'https://api.github.com/repos/kushagras71/understanding_ML/subscribers', 'subscription_url': 'https://api.github.com/repos/kushagras71/understanding_ML/subscription', 'commits_url': 'https://api.github.com/repos/kushagras71/understanding_ML/commits{/sha}', 'git_commits_url': 'https://api.github.com/repos/kushagras71/understanding_ML/git/commits{/sha}', 'comments_url': 'https://api.github.com/repos/kushagras71/understanding_ML/comments{/number}', 'issue_comment_url': 'https://api.github.com/repos/kushagras71/understanding_ML/issues/comments{/number}', 'contents_url': 
'https://api.github.com/repos/kushagras71/understanding_ML/contents/{+path}', 'compare_url': 'https://api.github.com/repos/kushagras71/understanding_ML/compare/{base}...{head}', 'merges_url': 'https://api.github.com/repos/kushagras71/understanding_ML/merges', 'archive_url': 'https://api.github.com/repos/kushagras71/understanding_ML/{archive_format}{/ref}', 'downloads_url': 'https://api.github.com/repos/kushagras71/understanding_ML/downloads', 'issues_url': 'https://api.github.com/repos/kushagras71/understanding_ML/issues{/number}', 'pulls_url': 'https://api.github.com/repos/kushagras71/understanding_ML/pulls{/number}', 'milestones_url': 'https://api.github.com/repos/kushagras71/understanding_ML/milestones{/number}', 'notifications_url': 'https://api.github.com/repos/kushagras71/understanding_ML/notifications{?since,all,participating}', 'labels_url': 'https://api.github.com/repos/kushagras71/understanding_ML/labels{/name}', 'releases_url': 'https://api.github.com/repos/kushagras71/understanding_ML/releases{/id}', 'deployments_url': 'https://api.github.com/repos/kushagras71/understanding_ML/deployments', 'created_at': '2020-06-29T18:10:02Z', 'updated_at': '2020-09-15T11:00:03Z', 'pushed_at': '2020-09-15T11:00:01Z', 'git_url': 'git://github.com/kushagras71/understanding_ML.git', 'ssh_url': '[email protected]:kushagras71/understanding_ML.git', 'clone_url': 'https://github.com/kushagras71/understanding_ML.git', 'svn_url': 'https://github.com/kushagras71/understanding_ML', 'homepage': None, 'size': 5110, 'stargazers_count': 0, 'watchers_count': 0, 'language': 'Jupyter Notebook', 'has_issues': True, 'has_projects': True, 'has_downloads': True, 'has_wiki': True, 'has_pages': False, 'forks_count': 0, 'mirror_url': None, 'archived': False, 'disabled': False, 'open_issues_count': 0, 'license': {'key': 'apache-2.0', 'name': 'Apache License 2.0', 'spdx_id': 'Apache-2.0', 'url': 'https://api.github.com/licenses/apache-2.0', 'node_id': 'MDc6TGljZW5zZTI='}, 'forks': 0, 'open_issues': 0, 'watchers': 0, 'default_branch': 'master'}, {'id': 266389591, 'node_id': 'MDEwOlJlcG9zaXRvcnkyNjYzODk1OTE=', 'name': 'AutoEncoders_and_GANs', 'full_name': 'kushagras71/AutoEncoders_and_GANs', 'private': False, 'owner': {'login': 'kushagras71', 'id': 58633364, 'node_id': 'MDQ6VXNlcjU4NjMzMzY0', 'avatar_url': 'https://avatars0.githubusercontent.com/u/58633364?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/kushagras71', 'html_url': 'https://github.com/kushagras71', 'followers_url': 'https://api.github.com/users/kushagras71/followers', 'following_url': 'https://api.github.com/users/kushagras71/following{/other_user}', 'gists_url': 'https://api.github.com/users/kushagras71/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/kushagras71/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/kushagras71/subscriptions', 'organizations_url': 'https://api.github.com/users/kushagras71/orgs', 'repos_url': 'https://api.github.com/users/kushagras71/repos', 'events_url': 'https://api.github.com/users/kushagras71/events{/privacy}', 'received_events_url': 'https://api.github.com/users/kushagras71/received_events', 'type': 'User', 'site_admin': False}, 'html_url': 'https://github.com/kushagras71/AutoEncoders_and_GANs', 'description': None, 'fork': False, 'url': 'https://api.github.com/repos/kushagras71/AutoEncoders_and_GANs', 'forks_url': 'https://api.github.com/repos/kushagras71/AutoEncoders_and_GANs/forks', 'keys_url': 
'https://api.github.com/repos/kushagras71/AutoEncoders_and_GANs/keys{/key_id}', 'collaborators_url': 'https://api.github.com/repos/kushagras71/AutoEncoders_and_GANs/collaborators{/collaborator}', 'teams_url': 'https://api.github.com/repos/kushagras71/AutoEncoders_and_GANs/teams', 'hooks_url': 'https://api.github.com/repos/kushagras71/AutoEncoders_and_GANs/hooks', 'issue_events_url': 'https://api.github.com/repos/kushagras71/AutoEncoders_and_GANs/issues/events{/number}', 'events_url': 'https://api.github.com/repos/kushagras71/AutoEncoders_and_GANs/events', 'assignees_url': 'https://api.github.com/repos/kushagras71/AutoEncoders_and_GANs/assignees{/user}', 'branches_url': 'https://api.github.com/repos/kushagras71/AutoEncoders_and_GANs/branches{/branch}', 'tags_url': 'https://api.github.com/repos/kushagras71/AutoEncoders_and_GANs/tags', 'blobs_url': 'https://api.github.com/repos/kushagras71/AutoEncoders_and_GANs/git/blobs{/sha}', 'git_tags_url': 'https://api.github.com/repos/kushagras71/AutoEncoders_and_GANs/git/tags{/sha}', 'git_refs_url': 'https://api.github.com/repos/kushagras71/AutoEncoders_and_GANs/git/refs{/sha}', 'trees_url': 'https://api.github.com/repos/kushagras71/AutoEncoders_and_GANs/git/trees{/sha}', 'statuses_url': 'https://api.github.com/repos/kushagras71/AutoEncoders_and_GANs/statuses/{sha}', 'languages_url': 'https://api.github.com/repos/kushagras71/AutoEncoders_and_GANs/languages', 'stargazers_url': 'https://api.github.com/repos/kushagras71/AutoEncoders_and_GANs/stargazers', 'contributors_url': 'https://api.github.com/repos/kushagras71/AutoEncoders_and_GANs/contributors', 'subscribers_url': 'https://api.github.com/repos/kushagras71/AutoEncoders_and_GANs/subscribers', 'subscription_url': 'https://api.github.com/repos/kushagras71/AutoEncoders_and_GANs/subscription', 'commits_url': 'https://api.github.com/repos/kushagras71/AutoEncoders_and_GANs/commits{/sha}', 'git_commits_url': 'https://api.github.com/repos/kushagras71/AutoEncoders_and_GANs/git/commits{/sha}', 'comments_url': 'https://api.github.com/repos/kushagras71/AutoEncoders_and_GANs/comments{/number}', 'issue_comment_url': 'https://api.github.com/repos/kushagras71/AutoEncoders_and_GANs/issues/comments{/number}', 'contents_url': 'https://api.github.com/repos/kushagras71/AutoEncoders_and_GANs/contents/{+path}', 'compare_url': 'https://api.github.com/repos/kushagras71/AutoEncoders_and_GANs/compare/{base}...{head}', 'merges_url': 'https://api.github.com/repos/kushagras71/AutoEncoders_and_GANs/merges', 'archive_url': 'https://api.github.com/repos/kushagras71/AutoEncoders_and_GANs/{archive_format}{/ref}', 'downloads_url': 'https://api.github.com/repos/kushagras71/AutoEncoders_and_GANs/downloads', 'issues_url': 'https://api.github.com/repos/kushagras71/AutoEncoders_and_GANs/issues{/number}', 'pulls_url': 'https://api.github.com/repos/kushagras71/AutoEncoders_and_GANs/pulls{/number}', 'milestones_url': 'https://api.github.com/repos/kushagras71/AutoEncoders_and_GANs/milestones{/number}', 'notifications_url': 'https://api.github.com/repos/kushagras71/AutoEncoders_and_GANs/notifications{?since,all,participating}', 'labels_url': 'https://api.github.com/repos/kushagras71/AutoEncoders_and_GANs/labels{/name}', 'releases_url': 'https://api.github.com/repos/kushagras71/AutoEncoders_and_GANs/releases{/id}', 'deployments_url': 'https://api.github.com/repos/kushagras71/AutoEncoders_and_GANs/deployments', 'created_at': '2020-05-23T17:49:37Z', 'updated_at': '2020-08-21T12:15:45Z', 'pushed_at': '2020-08-21T12:15:43Z', 'git_url': 
'git://github.com/kushagras71/AutoEncoders_and_GANs.git', 'ssh_url': '[email protected]:kushagras71/AutoEncoders_and_GANs.git', 'clone_url': 'https://github.com/kushagras71/AutoEncoders_and_GANs.git', 'svn_url': 'https://github.com/kushagras71/AutoEncoders_and_GANs', 'homepage': None, 'size': 926, 'stargazers_count': 0, 'watchers_count': 0, 'language': 'Jupyter Notebook', 'has_issues': True, 'has_projects': True, 'has_downloads': True, 'has_wiki': True, 'has_pages': False, 'forks_count': 0, 'mirror_url': None, 'archived': False, 'disabled': False, 'open_issues_count': 0, 'license': {'key': 'apache-2.0', 'name': 'Apache License 2.0', 'spdx_id': 'Apache-2.0', 'url': 'https://api.github.com/licenses/apache-2.0', 'node_id': 'MDc6TGljZW5zZTI='}, 'forks': 0, 'open_issues': 0, 'watchers': 0, 'default_branch': 'master'}]
###Markdown
Example: Using the Twitter APIs
###Code
!python -m pip install twython
###Output
Collecting twython
Downloading twython-3.8.2-py3-none-any.whl (33 kB)
Requirement already satisfied: requests-oauthlib>=0.4.0 in c:\users\kushagra\anaconda3\envs\deep_learning\lib\site-packages (from twython) (1.3.0)
Requirement already satisfied: requests>=2.1.0 in c:\users\kushagra\anaconda3\envs\deep_learning\lib\site-packages (from twython) (2.24.0)
Requirement already satisfied: oauthlib>=3.0.0 in c:\users\kushagra\anaconda3\envs\deep_learning\lib\site-packages (from requests-oauthlib>=0.4.0->twython) (3.1.0)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in c:\users\kushagra\anaconda3\envs\deep_learning\lib\site-packages (from requests>=2.1.0->twython) (1.25.9)
Requirement already satisfied: certifi>=2017.4.17 in c:\users\kushagra\anaconda3\envs\deep_learning\lib\site-packages (from requests>=2.1.0->twython) (2020.6.20)
Requirement already satisfied: idna<3,>=2.5 in c:\users\kushagra\anaconda3\envs\deep_learning\lib\site-packages (from requests>=2.1.0->twython) (2.10)
Requirement already satisfied: chardet<4,>=3.0.2 in c:\users\kushagra\anaconda3\envs\deep_learning\lib\site-packages (from requests>=2.1.0->twython) (3.0.4)
Installing collected packages: twython
Successfully installed twython-3.8.2
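###Markdown
 A minimal sketch of how the freshly installed Twython client might be used for a search query. The credential strings below are placeholders, not real keys; you need to supply your own from the Twitter developer console.
###Code
from twython import Twython

# Placeholder credentials (hypothetical values; substitute your own)
CONSUMER_KEY = "your_consumer_key"
CONSUMER_SECRET = "your_consumer_secret"

twitter = Twython(CONSUMER_KEY, CONSUMER_SECRET)
# Search for tweets mentioning "data science" and print who said what
for status in twitter.search(q='"data science"')["statuses"]:
    user = status["user"]["screen_name"]
    text = status["text"]
    print(f"{user}: {text}")
###Output
_____no_output_____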
|
Runge_Kutta_Test.ipynb | ###Markdown
Define a function to integrate
###Code
import numpy as np
import matplotlib.pyplot as plt
def dfdx(x,f):
    return x**2 + x
###Output
_____no_output_____
###Markdown
Define its integral
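For the $f$ above, $\int (x^2 + x)\,dx = \frac{x^3}{3} + \frac{x^2}{2} + C$, which is what `f_int` returns.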
###Code
def f_int(x,C):
return (x**3)/3. + 0.5*x**2 + C
###Output
_____no_output_____
###Markdown
Define the second-order RK method
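For reference, the midpoint (RK2) update implemented below is $f_{i+1} = f_i + h\, g\!\left(x_i + \tfrac{h}{2},\; f_i + \tfrac{h}{2}\, g(x_i, f_i)\right)$.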
###Code
def rk2_core(x_i,f_i,h,g):
# advance f by a step h
# half step
x_ipoh = x_i + 0.5*h
f_ipoh = f_i + 0.5*h*g(x_i,f_i)
#full step
f_ipo = f_i + h*g(x_ipoh,f_ipoh)
return f_ipo
###Output
_____no_output_____
###Markdown
Define a wrapper routine for RK2
###Code
def rk2(dfdx,a,b,f_a,N):
# dfdx is a derivative wrt x
# [a,b] is the respective lower and upper bound
# f_a is the boundary condition
# N is the number of steps
# define our steps
x = np.linspace(a,b,N)
# a single step size
h = x[1] - x[0]
# an array to hold f
f = np.zeros(N,dtype=float)
f[0] = f_a # value of f at a
# evolve f along x
for i in range(1,N):
f[i] = rk2_core(x[i-1],f[i-1],h,dfdx)
return x,f
###Output
_____no_output_____
###Markdown
Define the fourth-order RK method
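For reference, the classic RK4 update implemented below is
$$k_1 = h\,g(x_i, f_i),\quad k_2 = h\,g\!\left(x_i + \tfrac{h}{2},\, f_i + \tfrac{k_1}{2}\right),\quad k_3 = h\,g\!\left(x_i + \tfrac{h}{2},\, f_i + \tfrac{k_2}{2}\right),\quad k_4 = h\,g(x_i + h,\, f_i + k_3),$$
$$f_{i+1} = f_i + \tfrac{1}{6}\left(k_1 + 2k_2 + 2k_3 + k_4\right).$$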
###Code
def rk4_core(x_i,f_i,h,g):
# define x at 1/2 step
x_ipoh = x_i + 0.5*h
# define x at 1 step
x_ipo = x_i + h
# advance f by a step h
k_1 = h*g(x_i,f_i)
k_2 = h*g(x_ipoh,f_i + 0.5*k_1)
k_3 = h*g(x_ipoh,f_i + 0.5*k_2)
k_4 = h*g(x_ipo,f_i + k_3)
f_ipo = f_i + (k_1 + 2*k_2 + 2*k_3 + k_4)/6.
return f_ipo
###Output
_____no_output_____
###Markdown
Define a wrapper for RK4
###Code
def rk4(dfdx,a,b,f_a,N):
# dfdx is the derivative wrt x
# [a,b] is the upper and lower bounds
# f_a is the boundary condition at a
# N is the number of steps
# define our steps
x = np.linspace(a,b,N)
# a single step size
h = x[1] - x[0]
# an array to hold f
f = np.zeros(N,dtype=float)
f[0] = f_a # value of f at a
# evolve f along x
for i in range(1,N):
f[i] = rk4_core(x[i-1],f[i-1],h,dfdx)
return x,f
###Output
_____no_output_____
###Markdown
Evolve f using rk2 and rk4
###Code
a = 0.0
b = 1.0
f_a = 0.0
N = 10
x_2,f_2 = rk2(dfdx,a,b,f_a,N)
x_4,f_4 = rk4(dfdx,a,b,f_a,N)
x = x_2.copy()
fig = plt.figure(figsize=(7,5))
plt.plot(x_2,f_2,label='RK2')
plt.plot(x_4,f_4,label='RK4')
plt.plot(x,f_int(x,f_a),'ko',label='Analytic')
plt.legend(frameon=False)
plt.xlabel('x')
plt.ylabel('f(x)')
###Output
_____no_output_____
###Markdown
Plot the error
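Since the global truncation error scales as $O(h^2)$ for RK2 and $O(h^4)$ for RK4, the RK4 relative error below should sit several orders of magnitude closer to zero for the same number of steps.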
###Code
a = 0.0
b = 1.0
f_a = 0.0
N = 100
x_2,f_2 = rk2(dfdx,a,b,f_a,N)
x_4,f_4 = rk4(dfdx,a,b,f_a,N)
x = x_2.copy()
f_analytic = f_int(x,f_a)
error_2 = (f_2 - f_analytic)/f_analytic
error_4 = (f_4 - f_analytic)/f_analytic
fig = plt.figure(figsize=(7,5))
plt.plot(x_2,error_2,label='RK2')
plt.plot(x_4,error_4,label='RK4')
plt.legend(frameon=False)
plt.xlim([0,1])
plt.ylim([-1.0e-3,1.0e-4])
plt.xlabel('x')
plt.ylabel('f(x)')
###Output
_____no_output_____ |
scientificLibraries/Learn SciPy .ipynb | ###Markdown
Learn SciPy
###Code
import numpy as np
a = np.identity(3)
a
np.random.beta(5, 5, size=3)
from scipy.stats import beta
import matplotlib.pyplot as plt
%matplotlib inline
q = beta(5,5) # Beta(a,b), with a = b = 5
obs = q.rvs(1000)  # 1000 observations
grid = np.linspace(0.01, 0.99, 1000)
fig, ax = plt.subplots(figsize=(10,6))
ax.hist(obs, bins=40, normed=True)
ax.plot(grid, q.pdf(grid), 'k-', linewidth=2)
plt.show()
q.cdf(0.2) # cumulative distribution function
q.pdf(0.5) # probability density function
q.ppf(0.5) # quantile (inverse cdf) function
q.mean()
obs = beta.rvs(5,5, size=2000)
grid = np.linspace(0.01,0.99,100)
fig, ax = plt.subplots()
ax.hist(obs, bins=40, normed=True)
ax.plot(grid,beta.pdf(grid,5,5),'k-',linewidth=1)
plt.show()
###Output
/home/sjvasconcello/anaconda3/lib/python3.6/site-packages/matplotlib/axes/_axes.py:6462: UserWarning: The 'normed' kwarg is deprecated, and has been replaced by the 'density' kwarg.
warnings.warn("The 'normed' kwarg is deprecated, and has been "
###Markdown
Roots and Fixed Points $$f(x)= \sin(4(x-\frac{1}{4})) + x + x^{20} - 1$$
###Code
f = lambda x: np.sin(4*(x-1/4)) + x + x**20 - 1
x = np.linspace(-1,1,100)
plt.figure(figsize=(5,4))
plt.plot(x,f(x))
plt.axhline(ls='--',c='k')
plt.show()
###Output
_____no_output_____
###Markdown
Bisection
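The method repeatedly halves an interval that brackets the root; after $k$ iterations the root is located to within $(b - a)/2^{k}$, and the loop below stops once the interval width drops below `tol`.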
###Code
def bisect(f, a, b, tol=10e-5):
    lower, upper = a, b
    while upper - lower > tol:
        middle = 0.5 * (upper + lower)
        if f(middle) > 0:
            lower, upper = lower, middle
        else:
            lower, upper = middle, upper
    return 0.5 * (upper + lower)
from scipy.optimize import bisect
bisect(f, 0 ,1)
###Output
_____no_output_____
###Markdown
The Newton-Raphson Method
###Code
from scipy.optimize import newton
newton(f, 0.2)
newton(f,0.7)
%timeit bisect(f, 0 ,1)
%timeit newton(f, 0.2)
###Output
38.2 µs ± 328 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
###Markdown
Hybrid Methods
###Code
from scipy.optimize import brentq
brentq(f,0,1)
%timeit brentq(f, 0, 1)
###Output
35.2 µs ± 1.05 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
###Markdown
Optimization
###Code
from scipy.optimize import fminbound
fminbound(lambda x: x**2, -1, 2)
###Output
_____no_output_____
###Markdown
Integration
###Code
from scipy.integrate import quad
integral, error = quad(lambda x: x**2,0,1)
integral
def bisect(f,a,b,tol=10e-5):
    lower, upper = a, b
    if upper - lower < tol:
        return 0.5 * (upper + lower)
    else:
        middle = 0.5 * (upper + lower)
        print(f'Current mid point = {middle}')
        if f(middle) > 0:
            return bisect(f, lower, middle)
        else:
            return bisect(f, middle, upper)
f = lambda x: np.sin(4 * (x - 0.25)) + x + x**20 - 1
bisect(f, 0, 1)
###Output
Current mid point = 0.5
Current mid point = 0.25
Current mid point = 0.375
Current mid point = 0.4375
Current mid point = 0.40625
Current mid point = 0.421875
Current mid point = 0.4140625
Current mid point = 0.41015625
Current mid point = 0.408203125
Current mid point = 0.4091796875
Current mid point = 0.40869140625
Current mid point = 0.408447265625
Current mid point = 0.4083251953125
Current mid point = 0.40826416015625
|
getHiddenRep.ipynb | ###Markdown
Use pre-trained Model and get hidden representation.
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from SmilesTools import smiUtil as SU
from AE4SmilesLib import CNAE, tbHistoryPlot
sm = SU()
###Output
_____no_output_____
###Markdown
Keep parameters in a dictionary:

**lrep** : hidden rep size
**nEL** : number of conv + maxpool blocks
**reg** : activity L1 regularization factor
**flt** : number of conv filters per layer
**kern** : convolution kernel size
**opt** : optimizer to use
**ngpu** : number of gpus to use
**batch** : minibatch size
**EPO** : number of epochs
###Code
bp = {
'lrep' : 145,
'nEL' : 1,
'reg' : 1.0e-9,
'flt' : 32,
'kern' : 5,
'opt' : 'adam',
'ngpu' : 1,
'batch' : 256,
'EPO' : 30
}
bcn = CNAE(sm,**bp)
###Output
_____no_output_____
###Markdown
Network weights need to be loaded into a net of the same structure; **_lrep, nEL, flt & kern_** need to match the values used when the weights were saved.
###Code
bcn.loadw('data/test5MCNNv3Co1.hdf5')
dat = pd.read_pickle('data/6MSmiles.pkl')
zinc10k = dat[-10000:]
zinc10k = zinc10k.reset_index(drop=True)
zinc10k.to_csv('data/zinc10k.csv',sep='\t')
k = 2000
zinctst = dat[-k:]
zinctst = zinctst.reset_index(drop=True)
del dat
zinctst.head()
zoh = sm.smi2OH(zinctst)
zhr = bcn.enc.predict(zoh)
zhr.shape
z0 = zhr[0]
z1 = zhr[1]
###Output
_____no_output_____
###Markdown
Define Angular Cosine Similarity

Since the hidden representation is generated via ReLU activation, all elements will be >= 0, so the factor 2.0 is needed.
###Code
def acsim(x1,x2):
cs = np.dot(x1,x2)/np.sqrt(np.dot(x1,x1)*np.dot(x2,x2))
return 1.0 - 2.0*np.arccos(cs)/np.pi
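# Assumed sanity checks (not part of the original notebook run): identical
# vectors should give similarity 1.0, while orthogonal non-negative vectors
# (e.g. [1, 0] vs [0, 1]) should give 0.0 under this angular rescaling.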
sim = [acsim(z0,z) for z in zhr]
prazosin = pd.DataFrame({'Molecule':'COc1cc2nc(N3CCN(C(=O)c4ccco4)CC3)nc(N)c2cc1OC'},index=[0])
prazosin
prz = sm.smi2OH(prazosin)
przq = bcn.enc.predict(prz)[0]
psim = [acsim(przq,z) for z in zhr]
zinctst['vsPrazosin']=psim
zinctst.head()
cmpPairs = pd.read_csv('data/DM2000Pairs.csv',sep='\t')
cmpPairs.head()
chkp = sm.filterGood(cmpPairs,'Cmpd1')
chkp = sm.filterGood(chkp,'Cmpd2')
chkp.head()
oh1 = sm.smi2OH(chkp,'Cmpd1')
h1 = bcn.enc.predict(oh1)
oh2 = sm.smi2OH(chkp,'Cmpd2')
h2 = bcn.enc.predict(oh2)
pairsim=[acsim(x,y) for x,y in zip(h1,h2)]
pairsim[0:10]
chkp['hsim'] = pairsim
chkp.head(20)
p = np.polyfit(chkp.TS,chkp.hsim,deg=1)
x = chkp.TS
y = p[1] + p[0]*x
plt.plot(chkp.TS,chkp.hsim,'.')
plt.plot(x,y)
import seaborn as sns
x,y = chkp.TS,chkp.hsim
sns.jointplot(x,y,kind='reg')
zincPairs = pd.read_csv('data/Zinc2000Pairs.csv',sep='\t')
zincPairs.head()
oh1 = sm.smi2OH(zincPairs,'Cmpd1')
h1 = bcn.enc.predict(oh1)
oh2 = sm.smi2OH(zincPairs,'Cmpd2')
h2 = bcn.enc.predict(oh2)
zpairsim=[acsim(x,y) for x,y in zip(h1,h2)]
zincPairs['hsim']=zpairsim; zincPairs.head()
sns.jointplot('TS','hsim',data=zincPairs,kind='reg')
dfh1=pd.DataFrame(h1)
dfh1.columns = ['H'+str(c) for c in dfh1.columns]
dfh1.head()
dfzh1 = pd.concat([zincPairs,dfh1],axis=1); dfzh1.head()
dfzh1.drop(columns=['Cmpd2','TS','hsim'],inplace=True); dfzh1.head()
dfzh1.to_csv('data/zincH1.csv',sep='\t')
###Output
_____no_output_____ |
Wavelet QRS Detection.ipynb | ###Markdown
R peak detection in ECG signal using wavelet transform

In this notebook we are going to detect the R peaks in a filtered ECG signal using the stationary wavelet transform.
###Code
import scipy.io as sio
import pywt
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Helper functions
###Code
def next_power_of_2(x):
    # Smallest power of two >= x (used to zero-pad the signal for the SWT)
    return 1 if x == 0 else 2**(x - 1).bit_length()
def find(a, func):
    # Indices of all elements of `a` for which func(val) is True (MATLAB-style find)
    return [i for (i, val) in enumerate(a) if func(val)]
###Output
_____no_output_____
###Markdown
Load signal data
###Code
mat = sio.loadmat("data/qrs_detect.mat")
signal = mat['signal']
samplerate = mat['samplerate'][0][0]
L = next_power_of_2(signal.shape[1])
signalECG = np.zeros((L))
signalECG[0:signal.shape[1]] = signal[:]
t = np.arange(0,L)/samplerate
###Output
_____no_output_____
###Markdown
Stationary Wavelet Transformhttps://pywavelets.readthedocs.io/en/latest/ref/swt-stationary-wavelet-transform.html
###Code
# Stationary wavelet transform
swd = pywt.swt(signalECG,'db1', 10)
wavelet_level = int(np.floor(np.log2(samplerate/2/30)))
detailECG = swd[wavelet_level][1]
# Find wavelet maximums
detailAbs = np.abs(detailECG)
detailThres = detailAbs.copy()  # copy so thresholding below does not overwrite detailAbs
###Output
_____no_output_____
###Markdown
Find peaks in detail coefficients
###Code
# Filter wavelet coefficients with threshold (factor*average)
factor = 3
detailThres[detailThres < np.mean(detailAbs)*factor] = 0
detailSlope = np.gradient(detailThres)
# Find maximums of the wavelet coefficients
waveMax = np.zeros((L))
for k in range(1,L-2):
if (detailSlope[k-1] > 0) and (detailSlope[k+1] < 0):
waveMax[k] = 1
# Project maximums to ECG singal (--> R-peaks)
rPeak = np.zeros((L))
window = int(np.round(0.1*samplerate))
for k in range(L-window-1):
if waveMax[k] == 1:
I = np.argmax(np.abs(signalECG[k:k+window]))
rPeak[k+I] = 1
# Eliminate multiple points per peak
QTinterval = 0.35 #QT interval approx. 350 ms
interval = int(np.round(samplerate * QTinterval))
for k in range(interval, L-interval-1): #Eliminate all but the maximum in the interval
if rPeak[k] == 1:
index = np.argmax(np.abs(rPeak[k-interval:k+interval]))
rPeak[k-interval:k-interval+index-1] = 0
rPeak[k-interval+index+1:k+interval] = 0
###Output
_____no_output_____
###Markdown
Plot results
###Code
wavePoints = find(waveMax, lambda x: x > 0) # indices of the wavelet maximums
rPoints = find(rPeak, lambda x: x > 0) # indices of the r-peak
plt.figure(figsize=(20, 12))
plt.subplot(2,1,1)
plt.plot(t,signalECG,'b');
plt.plot(t[rPoints],signalECG[rPoints],'rs')
plt.xlabel('Time (s)')
plt.ylabel('Signal amplitude')
plt.title('ECG signal with marked R-peaks')
plt.legend(['R-Peaks','ECG signal'])
plt.subplot(2,1,2)
plt.plot(t,detailECG,'black')
plt.plot(t[wavePoints],detailECG[wavePoints],'rs')
plt.xlabel('Time (s)')
plt.ylabel('Amplitude')
plt.title('Wavelet coefficients')
plt.legend(['Wavelet maximums','Wavelet coefficients'])
plt.plot()
###Output
_____no_output_____ |
Fourier Transform.ipynb | ###Markdown
Complex Numbers

Return the angle of `a` in radians.
###Code
a = 1+1j
output = np.angle(a, deg=False)
print(output)
###Output
0.7853981633974483
###Markdown
Return the real part and imaginary part of `a`.
###Code
a = np.array([1+2j, 3+4j, 5+6j])
real = a.real
imag = a.imag
print("real part=", real)
print("imaginary part=", imag)
###Output
real part= [1. 3. 5.]
imaginary part= [2. 4. 6.]
###Markdown
Replace the real part of `a` with `9`, and the imaginary part with `[5, 7, 9]`.
###Code
a = np.array([1+2j, 3+4j, 5+6j])
a.real = 9
a.imag = [5, 7, 9]
print(a)
###Output
[9.+5.j 9.+7.j 9.+9.j]
###Markdown
Return the complex conjugate of `a`.
###Code
a = 1+2j
output = np.conjugate(a)
print(output)
###Output
(1-2j)
###Markdown
Discrete Fourier Transform

Compute the one-dimensional DFT of `a`.
###Code
a = np.exp(2j * np.pi * np.arange(8))
output = np.fft.fft(a)
print(output)
###Output
[ 8.00000000e+00-6.85802208e-15j 2.36524713e-15+9.79717439e-16j
9.79717439e-16+9.79717439e-16j 4.05812251e-16+9.79717439e-16j
0.00000000e+00+9.79717439e-16j -4.05812251e-16+9.79717439e-16j
-9.79717439e-16+9.79717439e-16j -2.36524713e-15+9.79717439e-16j]
###Markdown
Compute the one-dimensional inverse DFT of the `output` in the above question.
###Code
print("a=", a)
inversed = np.fft.ifft(output)
print("inversed=", a)
###Output
a= [1.+0.00000000e+00j 1.-2.44929360e-16j 1.-4.89858720e-16j
1.-7.34788079e-16j 1.-9.79717439e-16j 1.-1.22464680e-15j
1.-1.46957616e-15j 1.-1.71450552e-15j]
inversed= [1.+0.00000000e+00j 1.-2.44929360e-16j 1.-4.89858720e-16j
1.-7.34788079e-16j 1.-9.79717439e-16j 1.-1.22464680e-15j
1.-1.46957616e-15j 1.-1.71450552e-15j]
###Markdown
Compute the one-dimensional discrete Fourier Transform for real input `a`.
###Code
a = [0, 1, 0, 0]
output = np.fft.rfft(a)
print(output)
assert output.size==len(a)//2+1 if len(a)%2==0 else (len(a)+1)//2
# cf.
output2 = np.fft.fft(a)
print(output2)
###Output
[ 1.+0.j 0.-1.j -1.+0.j]
[ 1.+0.j 0.-1.j -1.+0.j 0.+1.j]
###Markdown
Compute the one-dimensional inverse DFT of the output in the above question.
###Code
inversed = np.fft.irfft(output)   # use irfft to invert an rfft result
print("inversed=", inversed)
###Output
inversed= [0, 1, 0, 0]
###Markdown
Return the DFT sample frequencies of `a`.
###Code
signal = np.array([-2, 8, 6, 4, 1, 0, 3, 5], dtype=np.float32)
fourier = np.fft.fft(signal)
n = signal.size
freq = np.fft.fftfreq(n, d=1)
print(freq)
###Output
[ 0. 0.125 0.25 0.375 -0.5 -0.375 -0.25 -0.125]
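The code above uses a unit sample spacing (`d=1`). With a physical sample rate, pass `d=1/fs` to get frequencies in hertz; the value of `fs` below is a made-up example:

```python
fs = 8  # hypothetical sample rate in Hz
print(np.fft.fftfreq(n, d=1/fs))   # [ 0.  1.  2.  3. -4. -3. -2. -1.]
```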
###Markdown
Window Functions
###Code
fig = plt.figure(figsize=(19, 10))
# Compare several common window functions over 51 samples
plt.plot(np.bartlett(51), label="Bartlett window")
plt.plot(np.blackman(51), label="Blackman window")
plt.plot(np.hamming(51), label="Hamming window")
plt.plot(np.hanning(51), label="Hanning window")
plt.plot(np.kaiser(51, 14), label="Kaiser window")
plt.xlabel("sample")
plt.ylabel("amplitude")
plt.legend()
plt.grid()
plt.show()
###Output
_____no_output_____
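The practical reason for these windows is spectral leakage: a sinusoid that does not complete an integer number of cycles in the analysis frame smears energy across the spectrum, and tapering the frame reduces this. The sketch below is an illustrative assumption, not part of the original notebook, and reuses the notebook's `np` and `plt`:

```python
# Compare the spectrum of an off-bin sine with and without a Hanning window
t = np.arange(256)
sig = np.sin(2 * np.pi * 10.5 * t / 256)          # non-integer number of cycles
spec_raw = np.abs(np.fft.rfft(sig))
spec_win = np.abs(np.fft.rfft(sig * np.hanning(256)))
plt.semilogy(spec_raw, label="rectangular (no window)")
plt.semilogy(spec_win, label="Hanning window")
plt.legend()
plt.show()
```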
###Markdown
Fourier Transforms

The frequency components of an image can be displayed after doing a Fourier Transform (FT). An FT looks at the components of an image (edges that are high-frequency, and areas of smooth color as low-frequency), and plots the frequencies that occur as points in a spectrum.In fact, an FT treats patterns of intensity in an image as sine waves with a particular frequency, and you can look at an interesting visualization of these sine wave components [on this page](https://plus.maths.org/content/fourier-transforms-images).In this notebook, we'll first look at a few simple image patterns to build up an idea of what image frequency components look like, and then transform a more complex image to see what it looks like in the frequency domain.
###Code
import numpy as np
import matplotlib.pyplot as plt
import cv2
%matplotlib inline
# Read in the images
image_stripes = cv2.imread('images/stripes.jpg')
# Change color to RGB (from BGR)
image_stripes = cv2.cvtColor(image_stripes, cv2.COLOR_BGR2RGB)
# Read in the images
image_solid = cv2.imread('images/pink_solid.jpg')
# Change color to RGB (from BGR)
image_solid = cv2.cvtColor(image_solid, cv2.COLOR_BGR2RGB)
# Display the images
f, (ax1,ax2) = plt.subplots(1, 2, figsize=(10,5))
ax1.imshow(image_stripes)
ax2.imshow(image_solid)
# convert to grayscale to focus on the intensity patterns in the image
gray_stripes = cv2.cvtColor(image_stripes, cv2.COLOR_RGB2GRAY)
gray_solid = cv2.cvtColor(image_solid, cv2.COLOR_RGB2GRAY)
# normalize the image color values from a range of [0,255] to [0,1] for further processing
norm_stripes = gray_stripes/255.0
norm_solid = gray_solid/255.0
# perform a fast fourier transform and create a scaled, frequency transform image
def ft_image(norm_image):
'''This function takes in a normalized, grayscale image
and returns a frequency spectrum transform of that image. '''
f = np.fft.fft2(norm_image)
fshift = np.fft.fftshift(f)
frequency_tx = 20*np.log(np.abs(fshift))
return frequency_tx
# Call the function on the normalized images
# and display the transforms
f_stripes = ft_image(norm_stripes)
f_solid = ft_image(norm_solid)
# display the images
# original images to the left of their frequency transform
f, (ax1,ax2,ax3,ax4) = plt.subplots(1, 4, figsize=(20,10))
ax1.set_title('original image')
ax1.imshow(image_stripes)
ax2.set_title('frequency transform image')
ax2.imshow(f_stripes, cmap='gray')
ax3.set_title('original image')
ax3.imshow(image_solid)
ax4.set_title('frequency transform image')
ax4.imshow(f_solid, cmap='gray')
###Output
_____no_output_____
###Markdown
Low frequencies are at the center of the frequency transform image. The transform images for these examples show that the solid image has mostly low-frequency components (as seen by the center bright spot). The stripes transform image contains low frequencies for the areas of white and black color and high frequencies for the edges in between those colors. The stripes transform image also tells us that there is one dominating direction for these frequencies; vertical stripes are represented by a horizontal line passing through the center of the frequency transform image.Next, let's see what this looks like applied to a real-world image.
###Code
# Read in an image
image = cv2.imread('images/birds.jpg')
# Change color to RGB (from BGR)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
# convert to grayscale
gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
# normalize the image
norm_image = gray/255.0
f_image = ft_image(norm_image)
# Display the images
f, (ax1,ax2) = plt.subplots(1, 2, figsize=(20,10))
ax1.imshow(image)
ax2.imshow(f_image, cmap='gray')
###Output
_____no_output_____
###Markdown
1D Discrete Fourier Transform
###Code
import cv2
import matplotlib.pyplot as plt
import numpy as np
#Input rectangular function
x = np.arange(-3, 3, 0.01)
y = np.zeros(len(x))
y[200:400] = 1
plt.plot(y)
plt.show()
#Fourier Transform
yShift = np.fft.fftshift(y)
fftyShift = np.fft.fft(yShift)
ffty = np.fft.fftshift(fftyShift)
plt.plot(ffty)
plt.show()
###Output
/home/orkhan/.virtualenvs/computer-vision-tutorial/lib/python3.8/site-packages/matplotlib/cbook/__init__.py:1333: ComplexWarning: Casting complex values to real discards the imaginary part
return np.asarray(x, float)
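The `ComplexWarning` above is raised because `plt.plot` is handed the complex array `ffty` directly and silently discards the imaginary part. A safer variant (an assumed addition, shown for illustration) is to plot the magnitude explicitly:

```python
plt.plot(np.abs(ffty))   # magnitude of the transform (a sinc-shaped spectrum)
plt.show()
```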
###Markdown
2D Image Fourier Transform with Numpy and Opencv
###Code
def show(img):
cv2.imshow("Image", img)
cv2.waitKey(0)
cv2.destroyAllWindows()
#Generate a 2D sine wave image
x = np.arange(256) #generate values from 0 to 255 (our image size)
frequency = 30 #set to smaller number to increase the frequency
y = np.sin(2 * np.pi * x / frequency) # calculate sine of x values
#Offset sine wave by the max value to go out of negative range of sine
y += max(y)
#Generate a 256x256 image (2D array of the sine wave)
img = np.array([[y[j]*127 for j in range(256)] for i in range(256)], dtype=np.uint8)
plt.imshow(img, cmap="gray")
def visualize_magnitude_spectrum(dft):
    magnitude_spectrum = 20 * np.log(np.abs(dft))
    # keep as float: casting to int8 can overflow/wrap for large magnitudes
    plt.imshow(magnitude_spectrum, cmap="gray")
###Output
_____no_output_____
###Markdown
Numpy way
###Code
#Get complext number fourier transform which has vector k - frequency and orientation
numpy_dft = np.fft.fft2(img)
#Shift low frequency to the center
numpy_dft_shift = np.fft.fftshift(numpy_dft)
#Only for visualization purpose
visualize_magnitude_spectrum(numpy_dft_shift)
###Output
/tmp/ipykernel_12161/881733894.py:2: RuntimeWarning: divide by zero encountered in log
magnitude_spectrum = 20 * np.log(np.abs(dft))
###Markdown
OpenCV way
###Code
#Apply Discrete Fourier Transform with Opencv, make sure image type converted to float32
#will output Complex numbers - Complex output returns k vector with magnitude and orientation
dft = cv2.dft(np.float32(img), flags=cv2.DFT_COMPLEX_OUTPUT)
#Shift low frequency to the center
dft_shift = np.fft.fftshift(dft)
#To visualize returned magnitude spectrum we use log
#dft_shift[:,:,0] - real number, dft_shift[:,:,1] - imaginary number
magnitude_spectrum = 20 * np.log((cv2.magnitude(dft_shift[:,:,0], dft_shift[:,:,1])))
plt.imshow(magnitude_spectrum.astype(np.int8), cmap="gray")
###Output
/tmp/ipykernel_12161/1075862338.py:5: RuntimeWarning: divide by zero encountered in log
magnitude_spectrum = 20 * np.log((cv2.magnitude(dft_shift[:,:,0], dft_shift[:,:,1])))
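As a sanity check (an assumed addition, not in the original run), the two transforms should agree: the magnitude of the NumPy result should match OpenCV's `cv2.magnitude` of the two output channels, up to float32 precision:

```python
mag_np = np.abs(numpy_dft_shift)
mag_cv = cv2.magnitude(dft_shift[:, :, 0], dft_shift[:, :, 1])
print(np.allclose(mag_np, mag_cv, rtol=1e-3))   # expected: True
```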
###Markdown
Low Pass Filter - masking high freq area - blurring image
###Code
img = cv2.imread('../assets/moon.png', 0) # load an image
#Output is a 2D complex array. 1st channel real and 2nd imaginary
#For fft in opencv input image needs to be converted to float32
dft = cv2.dft(np.float32(img), flags=cv2.DFT_COMPLEX_OUTPUT)
#Rearranges a Fourier transform X by shifting the zero-frequency
#component to the center of the array.
#Otherwise it starts at the tope left corenr of the image (array)
dft_shift = np.fft.fftshift(dft)
##Magnitude of the function is 20.log(abs(f))
#For values that are 0 we may end up with indeterminate values for log.
#So we can add 1 to the array to avoid seeing a warning.
magnitude_spectrum = 20 * np.log(cv2.magnitude(dft_shift[:, :, 0], dft_shift[:, :, 1]))
# Circular LPF mask, center circle is 1, remaining all zeros
# Only allows low frequency components - smooth regions
#Can smooth out noise but blurs edges.
rows, cols = img.shape
crow, ccol = int(rows / 2), int(cols / 2)
mask = np.zeros((rows, cols, 2), np.uint8)
r = 40
center = [crow, ccol]
x, y = np.ogrid[:rows, :cols]
mask_area = (x - center[0]) ** 2 + (y - center[1]) ** 2 <= r*r
mask[mask_area] = 1
#Mask image
plt.imshow(mask[:,:, 0], cmap="gray")
# apply mask and inverse DFT: Multiply fourier transformed image (values)
#with the mask values.
fshift = dft_shift * mask
#Get the magnitude spectrum (only for plotting purposes)
fshift_mask_mag = 20 * np.log(cv2.magnitude(fshift[:, :, 0], fshift[:, :, 1]))
#Inverse shift to shift origin back to top left.
f_ishift = np.fft.ifftshift(fshift)
#Inverse DFT to convert back to image domain from the frequency domain.
#Will be complex numbers
img_back = cv2.idft(f_ishift)
#Magnitude spectrum of the image domain
img_back = cv2.magnitude(img_back[:, :, 0], img_back[:, :, 1])
fig = plt.figure(figsize=(12, 12))
ax1 = fig.add_subplot(2,2,1)
ax1.imshow(img, cmap='gray')
ax1.title.set_text('Input Image')
ax2 = fig.add_subplot(2,2,2)
ax2.imshow(magnitude_spectrum, cmap='gray')
ax2.title.set_text('FFT of image')
ax3 = fig.add_subplot(2,2,3)
ax3.imshow(fshift_mask_mag, cmap='gray')
ax3.title.set_text('FFT + Mask')
ax4 = fig.add_subplot(2,2,4)
ax4.imshow(img_back, cmap='gray')
ax4.title.set_text('After inverse FFT')
plt.show()
###Output
/tmp/ipykernel_12161/1473198909.py:2: RuntimeWarning: divide by zero encountered in log
fshift_mask_mag = 20 * np.log(cv2.magnitude(fshift[:, :, 0], fshift[:, :, 1]))
###Markdown
High Pass Filter - masking low freq area at center - detecting edges
###Code
# Circular HPF mask, center circle is 0, remaining all ones
#Can be used for edge detection because low frequencies at center are blocked
#and only high frequencies are allowed. Edges are high frequency components.
#Amplifies noise.
rows, cols = img.shape
crow, ccol = int(rows / 2), int(cols / 2)
mask = np.ones((rows, cols, 2), np.uint8)
r = 20
center = [crow, ccol]
x, y = np.ogrid[:rows, :cols]
mask_area = (x - center[0]) ** 2 + (y - center[1]) ** 2 <= r*r
mask[mask_area] = 0
plt.imshow(mask[:, :, 0], cmap="gray")
# apply mask and inverse DFT: Multiply fourier transformed image (values)
#with the mask values.
fshift = dft_shift * mask
#Get the magnitude spectrum (only for plotting purposes)
fshift_mask_mag = 20 * np.log(cv2.magnitude(fshift[:, :, 0], fshift[:, :, 1]))
#Inverse shift to shift origin back to top left.
f_ishift = np.fft.ifftshift(fshift)
#Inverse DFT to convert back to image domain from the frequency domain.
#Will be complex numbers
img_back = cv2.idft(f_ishift)
#Magnitude spectrum of the image domain
img_back = cv2.magnitude(img_back[:, :, 0], img_back[:, :, 1])
fig = plt.figure(figsize=(12, 12))
ax1 = fig.add_subplot(2,2,1)
ax1.imshow(img, cmap='gray')
ax1.title.set_text('Input Image')
ax2 = fig.add_subplot(2,2,2)
ax2.imshow(magnitude_spectrum, cmap='gray')
ax2.title.set_text('FFT of image')
ax3 = fig.add_subplot(2,2,3)
ax3.imshow(fshift_mask_mag, cmap='gray')
ax3.title.set_text('FFT + Mask')
ax4 = fig.add_subplot(2,2,4)
ax4.imshow(img_back, cmap='gray')
ax4.title.set_text('After inverse FFT')
plt.show()
###Output
/tmp/ipykernel_12161/1745291996.py:2: RuntimeWarning: divide by zero encountered in log
fshift_mask_mag = 20 * np.log(cv2.magnitude(fshift[:, :, 0], fshift[:, :, 1]))
###Markdown
Band Pass Filter **The Band-Pass Filter will allow you to reduce the frequencies outside of a defined range of frequencies. We can think of it as low-passing and high-passing at the same time.**
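A standalone sketch of that equivalence (assumed sizes and radii, for illustration only, not the notebook's own cell): the band-pass mask is simply the product of a low-pass mask and a high-pass mask.

```python
import numpy as np

rows = cols = 256
crow, ccol = rows // 2, cols // 2
yy, xx = np.ogrid[:rows, :cols]
r2 = (yy - crow) ** 2 + (xx - ccol) ** 2       # squared distance from the centre
low_pass = (r2 <= 80 ** 2).astype(np.uint8)    # keep inside the outer radius
high_pass = (r2 >= 10 ** 2).astype(np.uint8)   # keep outside the inner radius
band_pass = low_pass * high_pass               # same as the concentric mask below
```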
###Code
# Band Pass Filter - Concentric circle mask, only the points living in concentric circle are ones
rows, cols = img.shape
crow, ccol = int(rows / 2), int(cols / 2)
mask = np.zeros((rows, cols, 2), np.uint8)
r_out = 80
r_in = 10
center = [crow, ccol]
x, y = np.ogrid[:rows, :cols]
mask_area = np.logical_and(((x - center[0]) ** 2 + (y - center[1]) ** 2 >= r_in ** 2),
((x - center[0]) ** 2 + (y - center[1]) ** 2 <= r_out ** 2))
mask[mask_area] = 1
plt.imshow(mask[:,:,0], cmap="gray")
# apply mask and inverse DFT: Multiply fourier transformed image (values)
#with the mask values.
fshift = dft_shift * mask
#Get the magnitude spectrum (only for plotting purposes)
fshift_mask_mag = 20 * np.log(cv2.magnitude(fshift[:, :, 0], fshift[:, :, 1]))
#Inverse shift to shift origin back to top left.
f_ishift = np.fft.ifftshift(fshift)
#Inverse DFT to convert back to image domain from the frequency domain.
#Will be complex numbers
img_back = cv2.idft(f_ishift)
#Magnitude spectrum of the image domain
img_back = cv2.magnitude(img_back[:, :, 0], img_back[:, :, 1])
fig = plt.figure(figsize=(12, 12))
ax1 = fig.add_subplot(2,2,1)
ax1.imshow(img, cmap='gray')
ax1.title.set_text('Input Image')
ax2 = fig.add_subplot(2,2,2)
ax2.imshow(magnitude_spectrum, cmap='gray')
ax2.title.set_text('FFT of image')
ax3 = fig.add_subplot(2,2,3)
ax3.imshow(fshift_mask_mag, cmap='gray')
ax3.title.set_text('FFT + Mask')
ax4 = fig.add_subplot(2,2,4)
ax4.imshow(img_back, cmap='gray')
ax4.title.set_text('After inverse FFT')
plt.show()
###Output
/tmp/ipykernel_12161/1473198909.py:2: RuntimeWarning: divide by zero encountered in log
fshift_mask_mag = 20 * np.log(cv2.magnitude(fshift[:, :, 0], fshift[:, :, 1]))
|
examples/Defining your own problem.ipynb | ###Markdown
Defining your own problem

This notebook shows you how to define your own differential equation problem and solve it using FBPINNs (and compare to a standard PINN).

Problem overview

The example problem we will define and solve here is the 1D damped harmonic oscillator:

$$m \dfrac{d^2 x}{d t^2} + \mu \dfrac{d x}{d t} + kx = 0~,$$

with the initial conditions

$$x(0) = 1~~,~~\dfrac{d x}{d t}(0) = 0~.$$

We will focus on solving the problem for the under-damped state, i.e. when

$$\delta < \omega_0~,~~~~~\mathrm{with}~~\delta = \dfrac{\mu}{2m}~,~\omega_0 = \sqrt{\dfrac{k}{m}}~.$$

This has the following exact solution:

$$x(t) = e^{-\delta t}(2 A \cos(\phi + \omega t))~,~~~~~\mathrm{with}~~\omega=\sqrt{\omega_0^2 - \delta^2}~.$$

This problem was inspired by the following blog post: https://beltoforion.de/en/harmonic_oscillator/ (image credits: https://commons.wikimedia.org/wiki/User:Jahobr)

Workflow overview

The workflow for defining and solving your own problem consists of the following steps:

1. Define your own custom `Problem` class
2. Initialise a `Constants` object with this new problem
3. Train a FBPINN / PINN using this `Constants` object
###Code
import numpy as np
import torch
import matplotlib.pyplot as plt
import sys
sys.path.insert(0, '../fbpinns/')
import problems
import losses
import boundary_conditions
import constants
import active_schedulers
import main
###Output
_____no_output_____
###Markdown
1. Define your own custom `Problem` class

First, you must define your own custom problem class that defines the differential equation problem. This problem class should be passed to the main trainer classes (`main.FBPINNTrainer` and `main.PINNTrainer`) via the `constants.Constants` object when training FBPINNs and PINNs.

Base `Problem` class

All problem classes must inherit the `problems._Problem` base class and define the following methods:

```python
class _Problem:
    "Base problem class to be inherited by different problem classes"

    @property
    def name(self):
        "Defines a name string (only used for labelling automated training runs)"
        raise NotImplementedError

    def __init__(self):
        raise NotImplementedError

    def physics_loss(self, x, *yj):
        "Defines the PINN physics loss to train the NN"
        raise NotImplementedError

    def get_gradients(self, x, y):
        "Returns the gradients yj required for this problem"

    def boundary_condition(self, x, *yj, args):
        "Defines the hard boundary condition to be applied to the NN ansatz"
        raise NotImplementedError

    def exact_solution(self, x, batch_size):
        "Defines exact solution if it exists"
        return None
```

Description of required methods

A description of the inputs and outputs of each method is below:

``def name(self):`` This should return a string and is a helper method which can be used for automated naming of training runs.

``def __init__(self):`` This method initialises the problem class. The only requirement of this method is that it should define an attribute `self.d = (d, d_u)`, which is a tuple of two integers defining the dimensionality of the input variable ($x$) and the solution ($u(x)$). This is used when initialising the subdomain neural networks and other parts of the training code. You can also optionally use this method to store any other useful variables.

``def physics_loss(self, x, *yj):`` This method defines the physics loss function used to train the FBPINN / PINN. Because we use a hard constraining operator in the solution ansatz, we only need this physics loss to train FBPINNs / PINNs. This method is passed `torch.Tensor` batches of input variables, the approximate FBPINN / PINN solution and its gradients (calculated by `self.get_gradients` below), and it should return a single scalar that penalises the residual of the underlying differential equation. This method depends on the specific problem!

``def get_gradients(self, x, y):`` This method computes the relevant gradients required to evaluate the physics loss above. This method is passed `torch.Tensor` batches of input variables and the approximate FBPINN / PINN solution, and it should use `torch.autograd.grad` to compute the solution gradients and return these (as well as the solution tensor) as an output tuple. Make sure you use the `create_graph=True` option in `torch.autograd.grad` so that the gradient graph is tracked and can be backpropagated through when updating the network weights.

``def boundary_condition(self, x, *yj, args):`` This method applies the hard constraining operator to the FBPINN / PINN solution and its gradients. The constraining operator ensures the boundary conditions are satisfied and depends on the specific problem. This method is passed `torch.Tensor` batches of input variables, the approximate FBPINN / PINN solution and its gradients (calculated by `self.get_gradients` above), and any other arguments required to apply the constraining operator, and it should return the approximate FBPINN / PINN solution and its gradients with the constraining operator applied. Typically this requires the use of the product rule to update the gradients; see the example class below. The `boundary_conditions` module also contains helper functions for applying constraining operators.

``def exact_solution(self, x, batch_size):`` This method computes the exact solution (if it exists), which is used to compute the test loss and to compare the FBPINN / PINN solution to when plotting the results. This method is passed a `torch.Tensor` batch of input variables and the shape of this tensor, and it should return the exact solution and its relevant gradients (matching those computed by `self.get_gradients`).

Example harmonic oscillator problem class

We set up the example `HarmonicOscillator1D` problem class below, which defines all of the methods above:
###Code
class HarmonicOscillator1D(problems._Problem):
"""Solves the 1D ODE:
d^2 u du
m ----- + mu -- + kx = 0
dx^2 dx
with the boundary conditions:
u (0) = 1
u'(0) = 0
"""
@property
def name(self):
return "HarmonicOscillator_%s-%s"%(self.delta, self.w0)# can be used for automatic labeling of runs
def __init__(self, delta, w0):
self.d = (1,1)# dimensionality of input variables and solution (d, d_u)
# we also store some useful problem variables too
self.delta, self.w0 = delta, w0
self.mu, self.k = 2*delta, w0**2# invert for mu, k given delta, w0 and fixing m=1 (without loss of generality)
def physics_loss(self, x, y, j, jj):
physics = jj + self.mu*j + self.k*y
return losses.l2_loss(physics, 0)
def get_gradients(self, x, y):
# for this problem we require j = du/dx and jj = d^2u/dx^2
j = torch.autograd.grad(y, x, torch.ones_like(y), create_graph=True)[0]
jj = torch.autograd.grad(j, x, torch.ones_like(j), create_graph=True)[0]
return y, j, jj
def boundary_condition(self, x, y, j, jj, sd):
# for this problem the boundary conditions are: u(0) = 1, u'(0) = 0. To satisy these constraints, we use
# the following constrained solution ansatz:
# u = 1 + tanh^2((x-0)/sd)*NN
t2, jt2, jjt2 = boundary_conditions.tanh2_2(x,0,sd)# use the helper boundary_conditions module to get gradients of tanh^2
y_new = t2*y + 1
j_new = jt2*y + t2*j# apply product rule
jj_new = jjt2*y + 2*jt2*j + t2*jj# apply product rule
return y_new, j_new, jj_new
def exact_solution(self, x, batch_size):
# we calculate the exact solution as derived in https://beltoforion.de/en/harmonic_oscillator/
# we assume the boundary conditions are u(0) = 1, u'(0) = 0
d,w0 = self.delta, self.w0
if d < w0: # underdamped case
w = np.sqrt(w0**2-d**2)
phi = np.arctan(-d/w)
A = 1/(2*np.cos(phi))
cos = torch.cos(phi+w*x)
sin = torch.sin(phi+w*x)
exp = torch.exp(-d*x)
y = exp*2*A*cos
j = exp*2*A*(-d*cos-w*sin)
jj = exp*2*A*((d**2-w**2)*cos+2*d*w*sin)
elif d == w0: # critically damped case
A,B = 1,d
exp = torch.exp(-d*x)
y = exp*(A+x*B)
j = -d*y + B*exp
jj = (d**2)*y - 2*d*B*exp
else: # overdamped case
a = np.sqrt(d**2-w0**2)
d1, d2 = a-d, -a-d
A = -d2/(2*a)
B = d1/(2*a)
exp1 = torch.exp(d1*x)
exp2 = torch.exp(d2*x)
y = A*exp1 + B*exp2
j = d1*A*exp1 + d2*B*exp2
jj = (d1**2)*A*exp1 + (d2**2)*B*exp2
return y, j, jj
###Output
_____no_output_____
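As an optional sanity check (a sketch that assumes `sympy` is available; it is not imported by the notebook itself), we can verify symbolically that the quoted underdamped solution satisfies the ODE with $m=1$, $\mu=2\delta$, $k=\omega_0^2$, as used in `__init__` above:

```python
import sympy as sp

t, d, w, A, phi = sp.symbols('t delta omega A phi', positive=True)
w0_sq = w**2 + d**2                               # since omega = sqrt(w0^2 - delta^2)
x = sp.exp(-d*t) * 2*A*sp.cos(phi + w*t)          # proposed underdamped solution
residual = sp.diff(x, t, 2) + 2*d*sp.diff(x, t) + w0_sq*x
print(sp.simplify(residual))                      # -> 0
```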
###Markdown
2. Initialise a `Constants` object with this new problemNext we initialise a `constants.Constants` object with this new problem class, as well as defining other appropriate problem parameters (domain, subdomains, training schedulers, etc) to train the FBPINN / PINN.
###Code
P = HarmonicOscillator1D(delta=0.2, w0=5)# underdamped
#P = HarmonicOscillator1D(delta=3, w0=2)# overdamped
#P = HarmonicOscillator1D(delta=2, w0=2)# critically damped
c1 = constants.Constants(
RUN="FBPINN_%s"%(P.name),
P=P,
SUBDOMAIN_XS=[np.arange(0,11,1)],
SUBDOMAIN_WS=[0.8*np.ones(11)],
BOUNDARY_N=(1/P.w0,),
Y_N=(-1,1),
ACTIVE_SCHEDULER=active_schedulers.AllActiveSchedulerND,
ACTIVE_SCHEDULER_ARGS=(),
N_HIDDEN=16,
N_LAYERS=2,
BATCH_SIZE=(400,),
N_STEPS=10000,
BATCH_SIZE_TEST=(1000,),
PLOT_LIMS=(1.2, False),
CLEAR_OUTPUT=True,
)
c2 = constants.Constants(
RUN="PINN_%s"%(P.name),
P=P,
SUBDOMAIN_XS=[np.arange(0,11,1)],
BOUNDARY_N=(1/P.w0,),
Y_N=(-1,1),
N_HIDDEN=32,
N_LAYERS=3,
BATCH_SIZE=(400,),
N_STEPS=10000,
BATCH_SIZE_TEST=(1000,),
PLOT_LIMS=(1.2, False),
CLEAR_OUTPUT=True,
)
###Output
_____no_output_____
###Markdown
3. Train a FBPINN / PINN using this `Constants` objectFinally, we train a FBPINN / PINN using this `Constants` object. We find that for the underdamped case with `delta=0.2, w0=5` the FBPINN with 10 subdomains converges with less training steps than the PINN.
###Code
# train FBPINN
run = main.FBPINNTrainer(c1)
run.train()
# compare to PINN
run = main.PINNTrainer(c2)
run.train()
# finally, compare runs by plotting saved test losses
fbpinn_loss = np.load("results/models/%s/loss_%.8i.npy"%(c1.RUN, 10000))
pinn_loss = np.load("results/models/%s/loss_%.8i.npy"%(c2.RUN, 10000))
plt.figure(figsize=(7,5))
plt.plot(fbpinn_loss[:,0], fbpinn_loss[:,3], label=c1.RUN)
plt.plot(pinn_loss[:,0], pinn_loss[:,3], label=c2.RUN)
plt.yscale("log")
plt.xlabel("Training step")
plt.ylabel("L1 loss")
plt.legend()
plt.title("Test loss")
plt.show()
###Output
_____no_output_____ |
content/Chapter_12/04_Heavy_Tails.ipynb | ###Markdown
Heavy Tails

This short section shows an example of how expectations and SDs, though very useful in many situations, aren't quite adequate when distributions have long, fat tails. Here is one such distribution.
###Code
N = 1000
n = np.arange(1, N+1, 1)
probs = (1/n)*(1/np.sum(1/n))
dist = Table().values(n).probability(probs)
Plot(dist)
plt.xlim(0, N/10);
###Output
_____no_output_____
###Markdown
You can see that the tail stretches out quite far. If we sample independently from this population, how does the sample average behave? Averages are affected by values out in the tails. Let's simulate the distribution of the average of a random sample of size 500 from this distribution. We'll do 10,000 repetitions to try to get the empirical distribution to settle down.
###Code
means = make_array()
for i in range(10000):
means = np.append(means, np.mean(dist.sample_from_dist(500)))
Table().with_column('Sample Means', means).hist(bins=20)
###Output
/Users/dominiccroce/anaconda3/envs/textbook/lib/python3.6/site-packages/matplotlib/axes/_axes.py:6462: UserWarning: The 'normed' kwarg is deprecated, and has been replaced by the 'density' kwarg.
warnings.warn("The 'normed' kwarg is deprecated, and has been "
###Markdown
That's a lovely distribution, but take a look at where it is centered. The center is just above 130, whereas the original distribution looked as though it was petering out at about 100:
###Code
Plot(dist)
plt.xlim(0, N/10);
###Output
_____no_output_____
###Markdown
This is where we have to remember that the original distribution actually goes out to 1000. Even though the tail is hardly visible beyond 100 on the scale of our graph, it is there and it is affecting the expectation. The expected value is about 133.6, which explains the center of the empirical distribution of the sample average.
###Code
dist.ev()
###Output
_____no_output_____
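As a quick cross-check (an assumed NumPy-only computation; the `Table` method above is the reference), the expectation can be computed directly from the probabilities. It equals $N/H_N$, where $H_N$ is the $N$th harmonic number:

```python
print(np.sum(n * probs))   # ~133.59, i.e. N / H_N
```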
###Markdown
It is sobering to realize that the balance point of the above histogram isn't even visible on the graph. There is enough mass far out in the tails to pull the balance point away to the right.How do we reconcile this with Chebyshev's Inequality telling us that the bulk of the probability is within a few SDs of the mean? The only way to find out is to calculate the SD of the distribution.
###Code
dist.sd()
###Output
_____no_output_____
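The same NumPy-only cross-check works for the SD (again an assumed sketch, not the notebook's own cell):

```python
ev = np.sum(n * probs)
sd = np.sqrt(np.sum(n**2 * probs) - ev**2)
print(ev, sd)   # roughly 133.6 and 221 -- the SD exceeds the mean
```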
###Markdown
And there we have it. The SD is huge, even bigger than the mean. The long tail makes the SD very large – so large that even the interval "expected value plus or minus one SD" is extremely wide and contains almost all the data.To analyze heavy-tailed distributions like this, the expected value and SD aren't the best quantities to use. There is a large and growing literature on what should be used instead. You might come across it in a more advanced course. Zipf's Law You are almost certain to come across distributions like these if you study natural language processing, or linguistics, or economics, or even the populations of cities. The example used in this section is one of the *Zipf* distributions that occurs in those fields.[Zipf's Law](https://en.wikipedia.org/wiki/Zipf's_law) is an empirically observed law that says that in large bodies of words, the frequency of a word is inversely proportional to its rank in a frequency table. That is, the frequency of the second most commonly occurring word is half the frequency of the most frequent. The frequency of the third most commonly occurring word is one-third of the frequency of the most frequent. And so on.According to Wikipedia, "... in the Brown Corpus of American English text, the word "the" is the most frequently occurring word, and by itself accounts for nearly 7% of all word occurrences (69,971 out of slightly over 1 million). True to Zipf's Law, the second-place word "of" accounts for slightly over 3.5% of words (36,411 occurrences), followed by "and" (28,852). Only 135 vocabulary items are needed to account for half the Brown Corpus." Now take another look at how the underlying distribution in our example was defined:
###Code
N = 1000
n = np.arange(1, N+1, 1)
probs = (1/n)*(1/np.sum(1/n))
###Output
_____no_output_____ |
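A quick check of the Zipf property in this construction (an assumed addition): the $k$th probability is $1/k$ times the largest one.

```python
print(probs[0] / probs[1], probs[0] / probs[2])   # 2.0 3.0
```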